
How to implement high speed, consistent sampling?

Tags:

python

The sort of application to have in mind is an oscilloscope or high speed data logger. I have a function which retrieves the required information, I just need to work out how to call it over and over again, very quickly and with high precision.

time.sleep() has limitations; I don't think it is the way to go.
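For context on why a plain sleep loop falls short: each iteration's overhead accumulates as drift, which can be compensated by sleeping towards an absolute deadline instead of for a fixed delay. Even then, sleep granularity on Windows (around 15.6 ms by default) makes it unsuitable at 10 ms. A minimal sketch of the deadline approach, for comparison:

```python
import time

def run_periodic(interval, func, iterations):
    """Call func every `interval` seconds, compensating for drift by
    sleeping until an absolute deadline rather than for a fixed delay."""
    next_deadline = time.perf_counter()
    for _ in range(iterations):
        func()
        next_deadline += interval
        # Sleep only for whatever time remains until the next deadline,
        # so the cost of func() does not accumulate across iterations.
        delay = next_deadline - time.perf_counter()
        if delay > 0:
            time.sleep(delay)

ticks = []
run_periodic(0.01, lambda: ticks.append(time.perf_counter()), 5)
```

This avoids cumulative drift, but each individual interval is still at the mercy of the OS scheduler and sleep resolution.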

I have looked into the built-in event scheduler (the sched module), but I don't think it's precise enough, and it doesn't quite fit my needs.

The requirements for this are:

  • High speed sampling. 10ms is the most that will be asked of it.
  • High accuracy intervals. At 10ms, a 10% error is acceptable (±1ms).
  • Fairly low CPU usage, some load is acceptable at 10ms, but it should be less than ~5% for 100ms intervals and beyond. I know this is subjective, I guess what I'm saying is that hogging the CPU is unacceptable.
  • Ideally, the timer will be initialised with an interval time, and then started when required. The required function should then be called at the correct interval over and over again until the timer is stopped.
  • It will (though it need not) only ever run on a Windows machine.

Are there any existing libraries that fulfil these requirements? I don't want to re-invent the wheel, but if I have to I will probably use the Windows multimedia timer (winmm.dll). Any comments/suggestions with that?

asked Jan 17 '23 by Gareth Webber


1 Answer

I know I'm late to the game answering my own question, but hopefully it will help someone.

I wrote a wrapper to the Windows Multimedia Timer purely as a test. It seems to work well, but the code isn't fully tested and hasn't been optimized.

mmtimer.py:

from ctypes import WINFUNCTYPE, windll, c_uint, c_ulong
from ctypes.wintypes import UINT, DWORD

# Callback prototype for timeSetEvent:
# void CALLBACK TimeProc(UINT uTimerID, UINT uMsg, DWORD_PTR dwUser,
#                        DWORD_PTR dw1, DWORD_PTR dw2)
timeproc = WINFUNCTYPE(None, c_uint, c_uint, DWORD, DWORD, DWORD)
timeSetEvent = windll.winmm.timeSetEvent
timeKillEvent = windll.winmm.timeKillEvent


class mmtimer:
    def Tick(self):
        self.tickFunc()

        if not self.periodic:
            self.stop()

    def CallBack(self, uID, uMsg, dwUser, dw1, dw2):
        if self.running:
            self.Tick()

    def __init__(self, interval, tickFunc, stopFunc=None, resolution=0, periodic=True):
        self.interval = UINT(interval)
        self.resolution = UINT(resolution)
        self.tickFunc = tickFunc
        self.stopFunc = stopFunc
        self.periodic = periodic
        self.id = None
        self.running = False
        # Keep a reference to the ctypes callback object so it isn't
        # garbage-collected while the timer is still using it.
        self.calbckfn = timeproc(self.CallBack)

    def start(self, instant=False):
        if not self.running:
            self.running = True
            if instant:
                self.Tick()

            # fuEvent flag: TIME_ONESHOT = 0, TIME_PERIODIC = 1,
            # so the boolean maps directly onto the flag value
            self.id = timeSetEvent(self.interval, self.resolution,
                                   self.calbckfn, c_ulong(0),
                                   c_uint(self.periodic))

    def stop(self):
        if self.running:
            timeKillEvent(self.id)
            self.running = False

            if self.stopFunc:
                self.stopFunc()

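As an aside, timeSetEvent's uResolution argument already requests a per-timer resolution, but if other timing in the same process (e.g. time.sleep()) also needs to be finer than the default system timer period, the winmm functions timeBeginPeriod/timeEndPeriod can request a finer global period for the duration of a sampling session. They must be called in matched pairs; the 1 ms target below is my assumption, not something from the original answer:

```python
import sys

TARGET_PERIOD_MS = 1  # assumption: request 1 ms global timer resolution

if sys.platform == "win32":
    from ctypes import windll
    # timeBeginPeriod/timeEndPeriod must be matched; the finer period
    # applies system-wide for as long as it is held.
    windll.winmm.timeBeginPeriod(TARGET_PERIOD_MS)
    try:
        pass  # run the mmtimer-based sampling session here
    finally:
        windll.winmm.timeEndPeriod(TARGET_PERIOD_MS)
```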
Periodic test code:

from mmtimer import mmtimer
import time

# time.clock() was removed in Python 3.8; time.perf_counter() is the
# modern high-resolution replacement.
start = time.perf_counter()

def tick():
    print("{0:.2f}".format((time.perf_counter() - start) * 1000))

t1 = mmtimer(10, tick)
t1.start(True)
time.sleep(0.1)
t1.stop()

Output in milliseconds:

0.00
10.40
20.15
29.91
39.68
50.43
60.19
69.96
79.72
90.46
100.23

One-shot test code:

from mmtimer import mmtimer
import time

start = time.perf_counter()

def tick():
    print("{0:.2f}".format((time.perf_counter() - start) * 1000))

t1 = mmtimer(150, tick, periodic=False)
t1.start()

Output in milliseconds:

150.17

As you can see from the results, it's pretty accurate. However, these timings come from a software clock in the same process, so take them with a pinch of salt.

During a prolonged test with a 10ms periodic timer, CPU usage was around 3% or less on my old dual-core 3GHz machine. The machine seems to sit at about that level when idle anyway, so I'd say the additional CPU usage is minimal.

answered Jan 28 '23 by Gareth Webber