
Sub-millisecond precision timing in C or C++

Tags:

c++

c

time

What techniques / methods exist for getting sub-millisecond precision timing data in C or C++, and what precision and accuracy do they provide? I'm looking for methods that don't require additional hardware. The application involves waiting for approximately 50 microseconds +/- 1 microsecond while some external hardware collects data.

EDIT: OS is Windows, probably with VS2010. If I can get drivers and SDKs for the hardware on Linux, I can go there using the latest GCC.

asked May 25 '10 by andand


2 Answers

When dealing with off-the-shelf operating systems, accurate timing is an extremely difficult and involved task. If you really need guaranteed timing, the only real option is a full real-time operating system. However, if "almost always" is good enough, here are a few tricks that will provide good accuracy on commodity Windows and Linux:

  1. Use a shielded CPU. Basically, this means steering IRQ affinity away from a selected CPU and setting the processor affinity mask of every other process on the machine to exclude that CPU. Then set your app's CPU affinity to run only on the shielded CPU. Effectively, this should prevent the OS from ever suspending your app, as it will always be the only runnable process on that CPU.
  2. Never let your process willingly yield control to the OS (which is inherently non-deterministic on non-real-time OSes). No memory allocation, no sockets, no mutexes, nada. Spin on RDTSC in a while loop waiting for your target time to arrive. It will consume 100% of a CPU, but it's the most accurate way to go.
  3. If number 2 is a bit too draconian, you can 'sleep short' and then burn the CPU up to your target time. Here, you take advantage of the fact that the OS schedules the CPU at set intervals, usually 100 or 1000 times per second depending on your OS and configuration (on Windows you can raise the default scheduling rate from 100/s to 1000/s using the multimedia timer API). This can be a little hard to get right, but essentially you need to determine when the OS scheduling periods occur and calculate the one prior to your target wake time. Sleep for that duration and then, upon waking, spin on RDTSC (if you're on a single CPU; use QueryPerformanceCounter or the Linux equivalent if not) until your target time arrives. Occasionally OS scheduling will cause you to miss, but generally speaking this mechanism works pretty well.

It seems like a simple question, but attaining 'good' timing gets exponentially more difficult the tighter your timing constraints are. Good luck!

answered Oct 11 '22 by Rakis


The hardware (and therefore the resolution) varies from machine to machine. On Windows specifically (I'm not sure about other platforms), you can use QueryPerformanceCounter and QueryPerformanceFrequency, but be aware that you should call both from the same thread and that there are no strict guarantees about resolution (QueryPerformanceFrequency is allowed to return 0, meaning no high-resolution timer is available). However, on most modern desktops there should be one accurate to microseconds.

answered Oct 11 '22 by AshleysBrain