I am performing various experiments on an Ubuntu box running kernel 3.5.7. I warm up my benchmark code with 10 million iterations and then time the next 90 million iterations. Even so, I see the following jitter:
Average: 242 nanos | Min Time: 230 nanos | Max Time: 4717 nanos
0.75 = avg: 240, max: 246
0.9 = avg: 241, max: 247
0.99 = avg: 242, max: 250
0.999 = avg: 242, max: 251
0.9999 = avg: 242, max: 517
0.99999 = avg: 242, max: **2109** <===
0.999999 = avg: 242, max: **3724** <===
0.9999999 = avg: 242, max: **4424** <===
I see bad times in roughly the worst 0.01% of my iterations. Is it possible to make a Linux kernel truly real-time? Or is there something else happening in the kernel that I can't control? The timing loop looks like this:
```c
#include <time.h>

/* Assumed helper (not shown in the question): monotonic timestamp in nanoseconds. */
static long get_nano_ts(struct timespec *ts) {
    clock_gettime(CLOCK_MONOTONIC, ts);
    return ts->tv_sec * 1000000000L + ts->tv_nsec;
}

struct timespec ts;
long x = 0; /* sink so the compiler cannot optimize the busy loop away */

for (int i = 0; i < iterations; i++) {
    long start = get_nano_ts(&ts);
    for (int j = 0; j < load; j++) {
        long p = (i % 8) * (i % 16);
        if (i % 2 == 0) {
            x += p;
        } else {
            x -= p;
        }
    }
    long end = get_nano_ts(&ts);
    long res = end - start; /* keep as long rather than truncating to int */
    // store the results, calculate the percentiles, averages, min, max, etc.
}
```
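For context, the stored samples can be reduced to the percentile report shown above roughly as follows. This is a minimal sketch, assuming the `res` values are collected into a `results` array; `report`, `cmp_long`, and the array itself are illustrative names, not part of the original code:

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp_long(const void *a, const void *b) {
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

/* results[0..n-1] holds one nanosecond measurement per timed iteration. */
static void report(long *results, int n) {
    qsort(results, n, sizeof(long), cmp_long);
    double sum = 0;
    for (int i = 0; i < n; i++) sum += results[i];
    printf("Average: %.0f nanos | Min Time: %ld nanos | Max Time: %ld nanos\n",
           sum / n, results[0], results[n - 1]);

    const double pcts[] = { 0.75, 0.9, 0.99, 0.999, 0.9999,
                            0.99999, 0.999999, 0.9999999 };
    for (size_t k = 0; k < sizeof(pcts) / sizeof(pcts[0]); k++) {
        int cut = (int)(pcts[k] * n); /* samples at or below this percentile */
        double psum = 0;
        for (int i = 0; i < cut; i++) psum += results[i];
        printf("%.7g = avg: %.0f, max: %ld\n",
               pcts[k], psum / cut, results[cut - 1]);
    }
}
```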
The short answer is no. To guarantee your code a 100% duty cycle, you would have to jettison everything else the kernel provides.
If you want a guaranteed monopoly on the CPU, you need a real-time operating system (RTOS) that lets you disable all interrupts. Something like FreeRTOS or VxWorks.
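As a rough illustration of what that monopoly looks like, here is a hedged FreeRTOS-flavoured sketch (the task and its names are illustrative, not working firmware):

```c
#include "FreeRTOS.h"
#include "task.h"

/* On an RTOS you can genuinely mask interrupts around a critical section,
   which is exactly what a general-purpose Linux kernel will not let user
   code do. Note that on most FreeRTOS ports taskENTER_CRITICAL() masks
   interrupts up to a configurable priority, not necessarily all of them. */
void timing_task(void *params) {
    for (;;) {
        taskENTER_CRITICAL();   /* nothing can preempt us here */
        /* ... time-critical work with an effectively 100% duty cycle ... */
        taskEXIT_CRITICAL();    /* interrupts are serviced again */
        vTaskDelay(1);          /* yield so the rest of the system can run */
    }
}
```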
The tickless kernel is designed for power savings when idle; it is not designed to disable interrupts entirely. All I/O devices are constantly demanding interrupts. If you disabled every I/O driver, ran tickless, and disabled every service that might periodically wake up, you might get close to jitter-free operation. But then you would have a brick.
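That said, stock Linux does have standard knobs that shrink the tail even though they cannot eliminate it: pin the thread to one core, run it under the SCHED_FIFO real-time policy, and lock its memory. A minimal sketch follows; the core number is arbitrary, and SCHED_FIFO requires root or CAP_SYS_NICE:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

/* Reduces, but does not eliminate, scheduling jitter on a stock kernel.
   Often paired with the isolcpus= boot parameter so the chosen core runs
   nothing else. */
static void pin_and_prioritize(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);                                /* core 3 is arbitrary */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    struct sched_param sp = { .sched_priority = 99 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) /* needs root/CAP_SYS_NICE */
        perror("sched_setscheduler");

    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)     /* avoid page-fault stalls */
        perror("mlockall");
}
```

Even with all of this, timer ticks, cross-CPU interrupts, and SMIs can still steal cycles from the loop, which is why the hard guarantee only comes from an RTOS.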