I have a small problem regarding threading in C#.
For some reason, my thread speeds up from a 32 ms delay to a 16 ms delay when I open Chrome; when I close Chrome it goes back to 32 ms. I'm using Thread.Sleep(1000 / 60) for the delay.
Can somebody explain why this is happening, and maybe suggest a possible solution?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ConsoleApplication2
{
    class Program
    {
        static bool alive;
        static Thread thread;
        static DateTime last;

        static void Main(string[] args)
        {
            alive = true;
            thread = new Thread(new ThreadStart(Loop));
            thread.Start();
            Console.ReadKey();
        }

        static void Loop()
        {
            last = DateTime.Now;
            while (alive)
            {
                DateTime current = DateTime.Now;
                TimeSpan span = current - last;
                last = current;
                Console.WriteLine("{0}ms", span.Milliseconds);
                Thread.Sleep(1000 / 60);
            }
        }
    }
}
If given a wait of 5000 milliseconds (5 seconds) and an element takes only 1-2 seconds to load, the script will still wait the remaining 3 seconds, which is bad because it unnecessarily increases the execution time. So Thread.Sleep() inflates execution time even in cases where the element loads well before the timeout.
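A common alternative is to poll the condition at short intervals and return as soon as it is satisfied, instead of sleeping for the full timeout. Here is a minimal sketch (it assumes the same System and System.Threading usings as the code above; the IsElementLoaded name is just a hypothetical placeholder for whatever check you are actually waiting on):

    static bool WaitFor(Func<bool> condition, int timeoutMs, int pollMs = 100)
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        while (sw.ElapsedMilliseconds < timeoutMs)
        {
            if (condition())          // stop waiting as soon as the condition holds
                return true;
            Thread.Sleep(pollMs);     // short poll interval instead of one long sleep
        }
        return false;                 // timed out
    }

    // Usage: waits at most 5 seconds, but returns as soon as the element is ready.
    // bool ready = WaitFor(() => IsElementLoaded(), 5000);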
Thread.Sleep is bad! It blocks the current thread and renders it unusable for further work.
If it is a UI worker thread, anywhere up to half a second should be good enough as long as there is some kind of progress indicator. The UI should stay responsive during the operation since it's a background thread, and you definitely have enough CPU time available to check every 500 ms.
To be clear: threads consume no CPU at all while they are not in a runnable state. Not even a tiny bit. This does not depend on the implementation.
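If the point of the Sleep is just to pace a background worker (the 500 ms check above), one option is to wait on a cancellation token instead: the thread still idles without consuming CPU, but it wakes immediately when asked to stop. A minimal sketch, where DoWorkChunk() is a hypothetical stand-in for the actual work:

    static void Worker(CancellationToken token)
    {
        // WaitOne returns true as soon as cancellation is requested;
        // otherwise it times out after 500 ms and another chunk of work runs.
        while (!token.WaitHandle.WaitOne(500))
        {
            DoWorkChunk();   // hypothetical work / progress update
        }
    }

    // Usage:
    // var cts = new CancellationTokenSource();
    // new Thread(() => Worker(cts.Token)).Start();
    // ...
    // cts.Cancel();        // the worker exits within at most ~500 ms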
Just a post to confirm Matthew's correct answer. The accuracy of Thread.Sleep() is affected by the clock interrupt rate on Windows. By default it ticks 64 times per second, once every 15.625 msec. A Sleep() can only complete when such an interrupt occurs. The mental image here is the one induced by the word "sleep": the processor is in fact asleep and not executing code. Only that clock interrupt is going to wake it up again to resume executing your code.
Your choice of 1000/60 was a very unhappy one: it asks for 16 msec, just a bit over 15.625, so you'll always wake back up at least 2 ticks later: 2 x 15.625 = 31 msec. Which is what you measured.
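As a rough worked example of that arithmetic (a simplified model that ignores where in the tick cycle the Sleep starts):

    const double tickMs = 1000.0 / 64.0;                     // default interrupt period: 15.625 msec
    int requestedMs = 1000 / 60;                             // integer division -> 16 msec
    int ticks = (int)Math.Ceiling(requestedMs / tickMs);     // 16 / 15.625 -> 2 ticks
    double actualMs = ticks * tickMs;                        // 2 x 15.625 = 31.25 msec
    Console.WriteLine($"requested {requestedMs} ms, actual ~{actualMs} ms");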
That interrupt rate is however not fixed; a program can alter it by calling CreateTimerQueueTimer() or the legacy timeBeginPeriod(). A browser in general has a need to do so. Something as simple as animating a GIF requires a better timer, since GIF frame times are specified in units of 10 msec. And in general any multimedia-related operation needs it.
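For reference, raising the clock interrupt rate from managed code is a single P/Invoke call into winmm.dll. A minimal sketch, not a recommendation, since the effect is system-wide and costs power:

    using System;
    using System.Runtime.InteropServices;
    using System.Threading;

    class TimerResolutionDemo
    {
        [DllImport("winmm.dll")]
        static extern uint timeBeginPeriod(uint uMilliseconds);

        [DllImport("winmm.dll")]
        static extern uint timeEndPeriod(uint uMilliseconds);

        static void Main()
        {
            timeBeginPeriod(1);        // request a 1 msec clock interrupt, system-wide
            try
            {
                Thread.Sleep(16);      // now completes close to the requested 16 msec
            }
            finally
            {
                timeEndPeriod(1);      // always undo the request when done
            }
        }
    }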
A very ugly side-effect of a program doing this is that this increased clock interrupt rate has system-wide effects. Like it did in your program. Your timer suddenly got accurate and you actually got the sleep duration you asked for, 16 msec. So Chrome is changing the rate to, probably, 1000 ticks per second. The maximum supported. And good for business when you have a competing operating system.
You can avoid this problem by picking a sleep duration that's a closer match to the default interrupt rate. If you ask for 15 then you'll get 15.625, and Chrome cannot have an effect on that. 31 is the next sweet spot. Et cetera: integer multiples of 15.625, rounded down.
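You can observe the difference by timing the requests with a Stopwatch (the exact numbers will vary with what else is running on the machine):

    var sw = new System.Diagnostics.Stopwatch();
    foreach (int ms in new[] { 15, 16, 31 })
    {
        sw.Restart();
        Thread.Sleep(ms);
        // With the default 15.625 msec tick, 15 tends to complete near one tick
        // and 16 near two; with Chrome raising the rate, 16 comes back at ~16 ms.
        Console.WriteLine($"Sleep({ms}) took {sw.Elapsed.TotalMilliseconds:F1} ms");
    }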