To my understanding, Thread.Sleep(0) forces a context switch on the OS.
I wanted to check what the maximum amount of time was that could pass in an application before it receives some CPU time.
So I built an application that calls Thread.Sleep(0) in a while loop (C#) and measures the time that passes between each call.
When this application is the only one running on a two-core test PC, the maximum observed time is just under 1 millisecond (with an average of 0.9 microseconds) and it uses all the available CPU (100%).
When I run it alongside a CPU-filling dummy application (both with the same priority), the max time is around 25 ms and the average is 20 ms. It behaves exactly as I expected, and the times are very stable.
Whenever it gets some CPU time it immediately gives control back to whatever else has processing to do; it's like a game of hot potato (CPU usage drops to 0%). If there's no other application running, control comes back immediately.
Given this behavior, I expected this application to have minimal impact on a computer running real-life applications (and to give me the actual "latency" I could expect to see in the applications running there). But to my surprise it negatively affected the performance of this particular system in an observable way.
Am I missing some important point concerning Thread.Sleep(0)?
For reference, here's the code of this application:
// Requires: using System.Diagnostics; using System.Threading;

private bool _running = true;
private readonly Stopwatch _timer = new Stopwatch();
private double _maxTime;
private long _count;
private double _average;
private double _current;

public Form1()
{
    InitializeComponent();
    Thread t = new Thread(Run);
    t.IsBackground = true; // don't keep the process alive after the form closes
    t.Start();
}

public void Run()
{
    while (_running)
    {
        _timer.Start();
        Thread.Sleep(0); // give up the rest of the time slice, if anyone wants it
        _timer.Stop();
        _current = _timer.Elapsed.TotalMilliseconds;
        _timer.Reset();
        _count++;
        // running (incremental) average of the observed wait times
        _average = _average * ((_count - 1.0) / _count) + _current * (1.0 / _count);
        if (_current > _maxTime)
        {
            _maxTime = _current;
        }
    }
}
Edited for clarity (purpose of the application): I am currently running a soft real-time multi-threaded application (well, a group of applications) that needs to react to inputs roughly every 300 ms. We do miss some deadlines from time to time (less than 1% of the time), and I'm currently trying to improve that number.
I wanted to verify how much variability is caused by other processes on the same machine: I thought that by running the application written above on this semi-real-time machine, the maximum observed time would tell me how much variability the system introduces. E.g., I have 300 ms, but if the max observed time before a thread gets some CPU time stands at 50 ms, then to improve performance I should cap my processing time at 250 ms (since I might already be 50 ms late).
Sleep(0) gives up the CPU only to threads with equal or higher priority.
According to MSDN's documentation for Sleep: "A value of zero causes the thread to relinquish the remainder of its time slice to any other thread that is ready to run. If there are no other threads ready to run, the function returns immediately, and the thread continues execution."
It doesn't force a context switch; only Sleep(1) does that. But if any other thread from any process is ready to run and has equal or higher priority, then Sleep(0) will yield the processor and let it run. You can see this by running an endless loop that calls Sleep(0): it will burn 100% of the CPU cycles on one core. I don't understand why you don't observe this behavior.
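A quick way to see the difference for yourself (a minimal sketch; run each variant separately and watch CPU usage in Task Manager; requires using System.Threading;):

static void YieldLoop()
{
    // Sleep(0) only offers the remainder of the time slice; if no other
    // thread takes it, the loop keeps spinning and burns ~100% of one core.
    while (true) Thread.Sleep(0);
}

static void ForcedSwitchLoop()
{
    // Sleep(1) always takes the thread off the CPU for at least one timer
    // interval (roughly 1-15 ms depending on the system clock resolution),
    // so CPU usage stays near 0%.
    while (true) Thread.Sleep(1);
}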
The best way to keep the system responsive is by giving your thread a low priority.
My understanding is that Thread.Sleep(0) does not force a thread context switch, it simply signals the task scheduler that you are willing to give up the rest of your time slice if there are other threads waiting to execute.
Your loop around Sleep(0) is chewing up CPU time, and that will have a negative effect on other applications (and laptop battery life!). Sleep(0) doesn't mean "let everything else execute first", so your loop will be competing for execution time with other processes.
Passing a non-zero wait time to Sleep() would be marginally better for other apps because it would actually force this thread to be put aside for a minimum amount of time. But this is still not how you implement a minimum-impact background thread.
The best way to run a CPU bound background thread with minimum impact to foreground applications is to lower your thread priority to something below normal. This will tell the scheduler to execute all normal priority threads first, and if/when there is any other time available then execute your low priority thread. The side effect of this is that sometimes your low priority thread may not get any execution time at all for relatively long periods of time (seconds) depending on how saturated the CPU is.
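As a minimal sketch of that approach (assuming a plain Thread; ThreadPriority.BelowNormal is a milder alternative to Lowest):

// Sketch: a CPU-bound background worker that defers to all
// normal-priority threads. Requires: using System.Threading;
Thread worker = new Thread(() =>
{
    while (true)
    {
        // ... CPU-bound background work here ...
    }
});
worker.IsBackground = true;              // don't keep the process alive
worker.Priority = ThreadPriority.Lowest; // scheduler runs everything else first
worker.Start();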
I was bitten by this bug in a previous project. I had a thread running which would check a prioritized queue, looking for a new message. If it found no new message, I wanted the thread to go to sleep until a message being added to the queue would wake it back up to check again.
Naively assuming that Thread.Sleep(0) would cause the thread to go to sleep until woken up again, I found our app consuming crazy amounts of CPU once messages started coming in.
After a few days of sleuthing possible causes, we found the info from this link. The quick fix was to use Thread.Sleep(1). The link has the details around the reason for the difference, including a little test app at the bottom demonstrating what happens to performance between the two options.
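For comparison, the wake-on-message behavior described above can be sketched with an event instead of polling (the names here are illustrative, not from the original project):

// Sketch: the consumer blocks on an event until a producer enqueues a
// message, instead of spinning with Thread.Sleep(0)/Sleep(1).
// Requires: using System.Collections.Concurrent; using System.Threading;
class MessagePump
{
    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();
    private readonly AutoResetEvent _signal = new AutoResetEvent(false);

    public void Enqueue(string message)
    {
        _queue.Enqueue(message);
        _signal.Set();         // wake the consumer if it is waiting
    }

    public void ConsumeLoop()
    {
        while (true)
        {
            string message;
            while (_queue.TryDequeue(out message))
            {
                // ... process the message ...
            }
            _signal.WaitOne(); // sleep until Enqueue signals; no CPU burned
        }
    }
}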
The most likely cause is that you aren't allowing the program to read instructions in an efficient manner.
When you invoke Sleep(0), your code is suspended for a single cycle and then scheduled for the next available slot. However, this context switching isn't free: there are plenty of registers to save and restore, and when a different sequence of instructions has to be read in you will probably end up with quite a few cache misses. I can't imagine this having a significant impact on your application in most cases, but if you are working with a real-time system or something similarly intensive then it might be a possibility.