I've been reading a paper on real-time systems using the Linux OS, and the term "scheduling jitter" is used repeatedly without definition.
What is scheduling jitter? What does it mean?
Jitter is the variation between successive periods of a given task. In a real-time OS it is important to reduce jitter to a level that is acceptable for the application.
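To make that concrete, here is a minimal sketch (not from the original answer) that measures the jitter of a nominally 10 ms periodic loop on Linux by timing successive iterations with `clock_gettime`. The period, iteration count, and use of a plain relative `nanosleep` are all illustrative assumptions:

```c
/* Minimal sketch: measure the jitter of a 10 ms periodic loop.
 * Assumes a POSIX/Linux system with CLOCK_MONOTONIC available. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define PERIOD_NS 10000000L   /* 10 ms nominal period (assumed for illustration) */
#define ITERATIONS 1000

static int64_t ts_diff_ns(struct timespec a, struct timespec b)
{
    return (int64_t)(b.tv_sec - a.tv_sec) * 1000000000L +
           (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    struct timespec prev, now, req = { 0, PERIOD_NS };
    int64_t max_jitter = 0;

    clock_gettime(CLOCK_MONOTONIC, &prev);
    for (int i = 0; i < ITERATIONS; i++) {
        nanosleep(&req, NULL);                 /* relative sleep: subject to drift and jitter */
        clock_gettime(CLOCK_MONOTONIC, &now);
        int64_t period = ts_diff_ns(prev, now);
        int64_t jitter = period - PERIOD_NS;   /* deviation from the nominal period */
        if (jitter < 0) jitter = -jitter;
        if (jitter > max_jitter) max_jitter = jitter;
        prev = now;
    }
    printf("max observed jitter: %lld ns\n", (long long)max_jitter);
    return 0;
}
```

On a stock desktop kernel this will typically show jitter in the tens of microseconds or more; the whole point of a real-time configuration is to bound that number.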
Jitter is the irregularity of a time-based signal. In networking, for example, jitter is the variability of packet latency across a network. In scheduling, I assume jitter refers to the variability in the time slices allocated to processes.
Read more here http://en.wikipedia.org/wiki/Jitter
Scheduling jitter is the maximum expected variation in the period at which a program executes.
This concept is very important in real-time simulation systems. My experience comes from over 30 years in the real-time simulation industry (mostly flight simulation). Ideally there would be no jitter at all, and that is precisely the objective of hard real-time scheduling.
Suppose, for example, that a real-time simulation needs to execute a certain program at 400 Hz in order to produce a stable and accurate simulation of that subsystem. That means we expect the system to execute the program once every 2.5 msec. To achieve that in a hard real-time system, high-resolution clocks are used to schedule that module at a high priority so that the jitter is nearly zero. In a soft real-time simulation, a higher amount of jitter would be expected. If the scheduling jitter were 0.1 msec, then the starting point for that program would be every 2.5 msec +/- 0.1 msec (or less). That would be acceptable as long as the program never takes longer than 2.3 msec to execute (the 2.5 msec period minus a 0.1 msec late start of this frame and a 0.1 msec early start of the next). Otherwise the program could "overrun". If that ever happens, determinism is lost and the simulation loses fidelity.
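A common way to build this kind of fixed-rate frame on Linux is a SCHED_FIFO thread that sleeps to absolute deadlines with `clock_nanosleep(..., TIMER_ABSTIME, ...)`, so timing errors do not accumulate. The sketch below is an assumption-laden illustration, not the author's actual simulator: the priority value (80), the frame count, the 0.1 msec reporting threshold, and the `run_simulation_frame()` placeholder are all hypothetical, and setting a real-time priority requires root or CAP_SYS_NICE.

```c
/* Sketch of a 400 Hz (2.5 ms) periodic frame under SCHED_FIFO on Linux.
 * Absolute deadlines via clock_nanosleep avoid cumulative drift. */
#include <stdio.h>
#include <time.h>
#include <sched.h>

#define PERIOD_NS 2500000L    /* 2.5 ms = 400 Hz */

static void ts_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };  /* hypothetical RT priority */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");      /* falls back to normal scheduling */

    struct timespec next, now;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int frame = 0; frame < 4000; frame++) {       /* ~10 s of frames */
        ts_add_ns(&next, PERIOD_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        clock_gettime(CLOCK_MONOTONIC, &now);
        long jitter_ns = (now.tv_sec - next.tv_sec) * 1000000000L +
                         (now.tv_nsec - next.tv_nsec); /* lateness vs. schedule */

        /* run_simulation_frame();  placeholder: must finish well inside 2.5 ms */

        if (jitter_ns > 100000)   /* report wake-ups more than 0.1 ms late */
            printf("frame %d woke %ld us late\n", frame, jitter_ns / 1000);
    }
    return 0;
}
```

With a PREEMPT_RT or otherwise well-tuned kernel, the reported lateness stays bounded, which is exactly the low, predictable jitter a hard real-time simulation depends on.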