 

Precise and reliable step timing in C# .NET/Mono

Tags: c#, mono, timer
For a project I am working on, I need to execute some logic (almost) exactly 10 times per second. I am aware of the limitations of non-realtime operating systems, and an occasional margin of, say, 10-20% is OK; that is, an occasional delay of up to 120 ms between cycles is acceptable. However, it is important that I can absolutely guarantee periodic logic execution, and that no delays outside the mentioned margin will occur. This seems hard to accomplish in C#.

My situation is as follows: some time after application startup, an event is triggered that starts the logic execution cycle. While that cycle runs, the program also handles other tasks such as communication, logging, etc. I need to be able to run the program both with .NET on Windows and with Mono on Linux. This rules out importing winmm.dll to use its high-precision timing functions.

What I tried so far:

  • Use a while loop, calculate the needed remaining delay after logic execution using a Stopwatch, then call Thread.Sleep with that amount of delay; this is very unreliable, and generally results in longer delays, occasionally very long ones
  • Use System.Threading.Timer; the callback is generally called every ~109 ms
  • Use System.Timers.Timer, which I believe is more appropriate, and set AutoReset to true; the Elapsed event is raised every ~109 ms
  • Use a high-precision timer, such as the ones that can be found here or here. However, this causes (as can be expected) very high CPU load, which is undesirable given my system design
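For reference, the first approach from the list can be sketched roughly as follows (DoWork stands in for the actual logic; the 100 ms step size is taken from the question):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class SleepLoop
{
    const int StepMs = 100;

    // Delay still needed to complete a step after the work took elapsedMs.
    public static int RemainingDelay(long elapsedMs) =>
        Math.Max(0, StepMs - (int)elapsedMs);

    static void Main()
    {
        var watch = new Stopwatch();
        for (int i = 0; i < 5; i++)            // a few cycles for illustration
        {
            watch.Restart();
            DoWork();
            // Thread.Sleep only guarantees a *minimum* delay; the scheduler
            // may wake the thread considerably later, which is the source of
            // the unreliability described in the first bullet.
            Thread.Sleep(RemainingDelay(watch.ElapsedMilliseconds));
        }
    }

    static void DoWork() =>
        Console.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff"));
}
```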

The best option so far seems to be the System.Timers.Timer class. To correct for the mentioned ~109 ms, I set the interval to 92 ms (which seems very hacky...!). Then, in the event handler, I calculate the actually elapsed time using a Stopwatch, and execute my system logic based on that calculation.

In code:

var timer = new System.Timers.Timer(92);
timer.Elapsed += TimerElapsed;
timer.AutoReset = true;
timer.Start();
while (true) { } // keep the application alive (this busy-wait itself causes high CPU load)

And the handler:

private void TimerElapsed(object sender, ElapsedEventArgs e)
{
    var elapsed = _watch.ElapsedMilliseconds;
    _watch.Restart();
    DoWork(elapsed);
}

However, even with this approach the event is occasionally triggered only after more than 200 ms, and sometimes even after more than 500 ms (on Mono). This means I miss one or more cycles of logic execution, which is potentially harmful.
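One way to make a late event less harmful is to derive the number of due cycles from the measured elapsed time, so that a 230 ms gap runs the logic twice instead of once. This is a sketch of that idea, not part of the original code; the step size and catch-up policy are assumptions:

```csharp
using System;

class CatchUp
{
    const int StepMs = 100;

    // Given the measured elapsed time since the last tick, return how many
    // whole 100 ms steps are due. A 230 ms gap means two cycles are owed.
    public static int DueCycles(long elapsedMs) => (int)(elapsedMs / StepMs);

    static void Main()
    {
        Console.WriteLine(DueCycles(230));  // 2
        Console.WriteLine(DueCycles(95));   // 0
    }
}
```

In the Elapsed handler this would mean calling DoWork once per due cycle (or passing the count to it), so that downstream state based on the step count stays consistent even when a tick arrives late.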

Is there a better way to deal with this? Or is this issue inherent to the way the OS works, and is there simply no more reliable way to do repetitive logic execution with steady intervals without high CPU loads?

asked Oct 30 '22 by mennowo
1 Answer

Meanwhile, I was able to largely solve the issue.

First off, I stand corrected regarding the CPU usage of the timers I referenced in the question: that CPU usage was caused by my own code, which used a tight while loop.

Having found that, I was able to solve the issue by using two timers and checking the type of environment at runtime to decide which one to actually use. To check the environment, I use:

private static readonly bool IsPosixEnvironment = Path.DirectorySeparatorChar == '/';

which is typically true under Linux.
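On frameworks that provide it (.NET Framework 4.7.1+, recent Mono, .NET Core), System.Runtime.InteropServices.RuntimeInformation offers a more explicit check than the path-separator heuristic; this sketch shows both side by side:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

class PlatformCheck
{
    // The separator heuristic from the answer: '/' on Linux, macOS and
    // other POSIX systems, '\\' on Windows.
    public static bool IsPosixBySeparator => Path.DirectorySeparatorChar == '/';

    // The explicit API, where available; distinguishes Linux from macOS.
    public static bool IsLinux => RuntimeInformation.IsOSPlatform(OSPlatform.Linux);

    static void Main()
    {
        Console.WriteLine($"separator heuristic: {IsPosixBySeparator}");
        Console.WriteLine($"RuntimeInformation:  {IsLinux}");
    }
}
```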

Now, it is possible to use two different timers, for example this one for Windows and this one for Linux, as follows:

if (IsPosixEnvironment)
{
    _linTimer = new PosixHiPrecTimer();
    _linTimer.Tick += LinTimerElapsed;
    _linTimer.Interval = _stepsize;
    _linTimer.Enabled = true;
}
else
{
    _winTimer = new WinHiPrecTimer();
    _winTimer.Elapsed += WinTimerElapsed;
    _winTimer.Interval = _stepsize;
    _winTimer.Resolution = 25;
    _winTimer.Start();
}

So far, this has given me good results; the step size is usually in the 99-101 ms range with the interval set to 100 ms. More importantly for my purposes, the occasional much longer intervals no longer occur.

On a slower system (a first-generation Raspberry Pi Model B), I still got occasional longer intervals, but I'd have to check overall efficiency first before drawing a conclusion there.

There is also this timer, which works out of the box under both operating systems. In a test program, however, it caused a higher CPU load under Linux with Mono than the one linked previously.
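When comparing timers like this, the interval jitter can be quantified by recording each inter-tick delta (from Stopwatch.ElapsedMilliseconds in the callback) and summarizing it. A small sketch of the summary step, with fixed deltas standing in for real measurements:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class JitterStats
{
    // Summarize a series of measured inter-tick intervals (in ms).
    public static (long Min, long Max, double Mean) Summarize(IList<long> deltas) =>
        (deltas.Min(), deltas.Max(), deltas.Average());

    static void Main()
    {
        // In a real test these deltas come from the timer callback;
        // fixed values here for illustration only.
        var deltas = new List<long> { 99, 101, 100, 108, 97 };
        var (min, max, mean) = Summarize(deltas);
        Console.WriteLine($"min={min} max={max} mean={mean}");  // min=97 max=108 mean=101
    }
}
```

The maximum delta is the figure that matters most here, since a single > 200 ms outlier is what causes a missed logic cycle.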

answered Nov 17 '22 by mennowo