1ms resolution timer under linux recommended way

Tags: c, linux, timer

I need a timer tick with 1ms resolution under Linux. It is used to increment a timer value which in turn is used to check whether various events should be triggered. The POSIX timerfd_create is not an option because of the glibc requirement. I tried timer_create and timer_settime, but the best I get from them is a 10ms resolution; smaller values seem to default to 10ms. getitimer and setitimer have a 10ms resolution according to the manpage.
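
Roughly what I tried (a simplified sketch, not my actual code; error handling and the signal handler are omitted):

    #include <signal.h>
    #include <string.h>
    #include <time.h>

    /* Arm a POSIX per-process timer with a 1ms period. */
    static void arm_1ms_timer(void)
    {
        timer_t tid;
        struct sigevent sev;
        struct itimerspec its;

        memset(&sev, 0, sizeof sev);
        sev.sigev_notify = SIGEV_SIGNAL;
        sev.sigev_signo  = SIGRTMIN;
        timer_create(CLOCK_MONOTONIC, &sev, &tid);

        memset(&its, 0, sizeof its);
        its.it_value.tv_nsec    = 1000000;   /* first expiry after 1ms */
        its.it_interval.tv_nsec = 1000000;   /* then every 1ms */
        timer_settime(tid, 0, &its, NULL);

        /* Without high-resolution timers in the kernel this gets rounded
           up to the tick period, e.g. 10ms on a 100Hz kernel. */
    }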

The only way to do this timer I can currently think of is to use clock_gettime with CLOCK_MONOTONIC in my main loop and test whether a millisecond has passed, and if so to increase the counter (and then check whether the various events should fire).
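
Something along these lines (just a sketch of the idea; the names are illustrative):

    #include <time.h>

    /* Monotonic time in milliseconds. */
    static long long now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    static void main_loop(void)
    {
        long long tick_counter = 0;      /* the global timer value */
        long long last = now_ms();

        for (;;) {
            long long now = now_ms();
            if (now > last) {
                tick_counter += now - last;
                last = now;
                /* check whether any events should fire */
            }
            /* handle network I/O etc. without blocking */
        }
    }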

Is there a better way to do this than to constantly query in the main loop? What is the recommended solution to this?

The language I am using is plain old C.

Update
I am using a 2.6.26 kernel. I know you can have it interrupt at 1kHz, and the POSIX timer_* functions can then be programmed with up to 1ms resolution, but that does not seem to be reliable and I don't want to use it, because it may require a new kernel on some systems. Some stock kernels still seem to be configured with 100Hz, and I would need to detect that. The application may be run on systems other than mine :)

I can not sleep for 1ms because there may be network events I have to react to.

How I resolved it: Since it is not that important, I simply declared that the global timer has a 100ms resolution. All events using their own timer have to set at least 100ms for timer expiration. I was more or less wondering whether there would be a better way, hence the question.

Why I accepted the answer: I think the answer from freespace best describes why this is not really possible without a real-time Linux system.

asked Oct 27 '08 by Edger


People also ask

What is a high resolution timer in Linux?

The High Resolution Timers system allows a user-space program to be woken up from a timer event with better accuracy when using the POSIX timer APIs. Without this system, the best accuracy that can be obtained for timer events is 1 jiffy, which depends on the setting of HZ in the kernel.

Does Linux have a timer?

Actually, the Linux kernel provides two types of timers, called dynamic timers and interval timers. The first type is used by the kernel, while the second can be used from user mode. Dynamic timers are represented by the timer_list structure.

How do timers work in Linux?

Timers are used to schedule execution of a function (a timer handler) at a particular time in the future. They thus work differently from task queues and tasklets in that you can specify when in the future your function will be called, whereas you can't tell exactly when a queued task will be executed.

What is a timer interrupt in Linux?

Timer interrupts are generated by the system's timing hardware at regular intervals; this interval is set by the kernel according to the value of HZ, which is an architecture-dependent value defined in <linux/param.h>.


2 Answers

Polling in the main loop isn't an answer either - your process might not get much CPU time, so more than 10ms will elapse before your code gets to run, rendering it moot.

10ms is about the standard timer resolution on most non-realtime operating systems. But this is moot on a non-RTOS: the behaviour of the scheduler and dispatcher is going to greatly influence how quickly you can respond to a timer expiring. For example, even if you had a sub-10ms resolution timer, you can't respond to the timer expiring if your code isn't running. Since you can't predict when your code is going to run, you can't respond to timer expiration accurately.

There are of course realtime Linux kernels; see http://www.linuxdevices.com/articles/AT8073314981.html for a list. An RTOS offers facilities whereby you can get soft or hard guarantees about when your code will run. This is about the only way to respond reliably and accurately to timers expiring.

answered Sep 16 '22 by freespace


To get 1ms resolution timers, do what libevent does.

Organize your timers into a min-heap; that is, the top of the heap is the timer with the earliest (absolute) expiry time (an rb-tree would also work, but with more overhead). Before calling select() or epoll() in your main event loop, calculate the delta in milliseconds between the expiry time of the earliest timer and now. Use this delta as the timeout to select(). select() and epoll() timeouts have 1ms resolution.
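
A rough sketch of one iteration of such a loop (timer_heap_min() and run_expired_timers() are placeholders for your own min-heap code, not libevent APIs):

    #include <sys/epoll.h>
    #include <time.h>

    #define MAX_EVENTS 64

    long long timer_heap_min(void);          /* earliest absolute expiry, in ms */
    void run_expired_timers(long long now);  /* pop and fire every timer due by now */

    /* Monotonic time in milliseconds. */
    static long long now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    static void event_loop_once(int epfd)
    {
        struct epoll_event events[MAX_EVENTS];

        /* Wait no longer than the time until the earliest timer expires. */
        long long next = timer_heap_min();
        long long now  = now_ms();
        int timeout = (next > now) ? (int)(next - now) : 0;

        int nfds = epoll_wait(epfd, events, MAX_EVENTS, timeout);
        if (nfds > 0) {
            /* handle the ready file descriptors (network events) */
        }

        run_expired_timers(now_ms());
    }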

I've got a timer resolution test that uses the mechanism explained above (but not libevent). The test measures the difference between the desired expiry time and the actual expiry time for 1ms, 5ms and 10ms timers:

1000 deviation samples of  1msec timer: min=  -246115nsec max=  1143471nsec median=   -70775nsec avg=      901nsec stddev=    45570nsec
1000 deviation samples of  5msec timer: min=  -265280nsec max=   256260nsec median=  -252363nsec avg=     -195nsec stddev=    30933nsec
1000 deviation samples of 10msec timer: min=  -273119nsec max=   274045nsec median=   103471nsec avg=     -179nsec stddev=    31228nsec
1000 deviation samples of  1msec timer: min=  -144930nsec max=  1052379nsec median=  -109322nsec avg=     1000nsec stddev=    43545nsec
1000 deviation samples of  5msec timer: min= -1229446nsec max=  1230399nsec median=  1222761nsec avg=      724nsec stddev=   254466nsec
1000 deviation samples of 10msec timer: min= -1227580nsec max=  1227734nsec median=    47328nsec avg=      745nsec stddev=   173834nsec
1000 deviation samples of  1msec timer: min=  -222672nsec max=   228907nsec median=    63635nsec avg=       22nsec stddev=    29410nsec
1000 deviation samples of  5msec timer: min= -1302808nsec max=  1270006nsec median=  1251949nsec avg=     -222nsec stddev=   345944nsec
1000 deviation samples of 10msec timer: min= -1297724nsec max=  1298269nsec median=  1254351nsec avg=     -225nsec stddev=   374717nsec

The test ran as a real-time process on Fedora 13, kernel 2.6.34; the best precision achieved for the 1ms timer was avg=22nsec, stddev=29410nsec.

answered Sep 20 '22 by Maxim Egorushkin