For a project I am working on, I need to execute some logic (almost) exactly 10 times per second. I am aware of the limitations of non-realtime operating systems, and an occasional margin of, say, 10-20% is acceptable; that is, an occasional delay of up to 120 ms between cycles is fine. However, it is important that I can absolutely guarantee periodic execution of the logic, with no delays outside that margin. This seems hard to accomplish in C#.
My situation is as follows: some time after application startup, an event is triggered that starts the logic execution cycle. While that cycle runs, the program also handles other tasks such as communication and logging. I need to be able to run the program both with .NET on Windows and with Mono on Linux, which rules out importing winmm.dll to use its high-precision timing functions.
What I tried so far:
The best option so far seems to be the System.Timers.Timer class. To correct for the fact that a nominal 100 ms interval actually fires only about every 109 ms, I set the interval to 92 ms (which seems very hacky...!). Then, in the event handler, I calculate the actually elapsed time using a Stopwatch and execute my system logic based on that measurement.
In code:
// Interval set below the 100 ms target to compensate for the observed drift.
var timer = new System.Timers.Timer(92);
timer.Elapsed += TimerElapsed;
timer.AutoReset = true;
timer.Start();

// Keep the application alive.
while (true) { }
And the handler:
// _watch is a Stopwatch field, started before the timer is enabled.
private readonly Stopwatch _watch = Stopwatch.StartNew();

private void TimerElapsed(object sender, ElapsedEventArgs e)
{
    // Measure how much time has actually passed since the previous tick.
    var elapsed = _watch.ElapsedMilliseconds;
    _watch.Restart();
    DoWork(elapsed);
}
However, even with this approach it occasionally happens that the event is triggered only after more than 200 ms, and sometimes even after more than 500 ms (on Mono). That means I miss one or more cycles of logic execution, which is potentially harmful.
Is there a better way to deal with this? Or is this issue inherent to the way the OS works, and is there simply no more reliable way to execute repetitive logic at steady intervals without a high CPU load?
Meanwhile, I was able to largely solve the issue.
First off, I stand corrected on the CPU usage of the timers I referenced in the question: the CPU load was caused by my own code, which used a tight while loop.
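For completeness, here is a minimal sketch of how that busy-wait can be replaced; this is not from my actual program, and ExitEvent is just an illustrative name. Blocking the main thread on a wait handle keeps the process alive without spinning a core:

using System;
using System.Threading;

class TimerHost
{
    // Signalled when the application should shut down (hypothetical name).
    private static readonly ManualResetEventSlim ExitEvent = new ManualResetEventSlim(false);

    static void Main()
    {
        var timer = new System.Timers.Timer(92);
        timer.Elapsed += (s, e) => Console.WriteLine(DateTime.UtcNow.ToString("HH:mm:ss.fff"));
        timer.AutoReset = true;
        timer.Start();

        // Instead of `while (true) { }`, block here without consuming CPU.
        // Some other part of the program calls ExitEvent.Set() on shutdown.
        ExitEvent.Wait();
    }
}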
Having found that, I was able to solve the issue by using two timers and checking the type of environment at runtime to decide which one to use. To check the environment, I use:
private static readonly bool IsPosixEnvironment = Path.DirectorySeparatorChar == '/';
which is typically true under Linux.
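As a side note (an assumption on my part, not something I used in the program), a slightly more explicit check along the following lines should also work, since Mono reports PlatformID.Unix on Linux:

// Alternative runtime check; Mono reports PlatformID.Unix on Linux.
private static readonly bool IsPosixEnvironment =
    Environment.OSVersion.Platform == PlatformID.Unix;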
Now it is possible to use two different timers, for example this one for Windows and this one for Linux, as follows:
if (IsPosixEnvironment)
{
    // Linux/Mono: the high-precision timer linked above.
    _linTimer = new PosixHiPrecTimer();
    _linTimer.Tick += LinTimerElapsed;
    _linTimer.Interval = _stepsize;   // 100 ms step size
    _linTimer.Enabled = true;
}
else
{
    // Windows: the high-precision timer linked above.
    _winTimer = new WinHiPrecTimer();
    _winTimer.Elapsed += WinTimerElapsed;
    _winTimer.Interval = _stepsize;   // 100 ms step size
    _winTimer.Resolution = 25;
    _winTimer.Start();
}
So far, this has given me good results; with the interval set to 100 ms, the step size is usually in the 99-101 ms range. Even more importantly for my purposes, there are no more long intervals.
On a slower system (a first-generation Raspberry Pi Model B), I still got occasional longer intervals, but I'd have to check overall efficiency first before drawing a conclusion there.
There is also this timer, which works out of the box under both operating systems. In a test program, however, it caused a higher CPU load under Linux with Mono than the one linked previously.