I've written a program that uses all available cores via Parallel.ForEach. The list passed to the ForEach contains ~1000 objects, and the computation for each object takes some time (~10 seconds). In this scenario I set up a timer like this:
// Fire TimerHandler every 15 seconds.
timer = new System.Timers.Timer();
timer.Elapsed += TimerHandler;
timer.Interval = 15000;
timer.Enabled = true;

private void TimerHandler(object source, ElapsedEventArgs e)
{
    Console.WriteLine(DateTime.Now + ": Timer fired");
}
At the moment the TimerHandler method is a stub, to rule it out as the cause of the problem. My expectation was that the TimerHandler method would be executed every ~15 seconds. However, the time between two calls sometimes reaches 40 seconds, i.e. 25 seconds too long.
By passing new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount - 1 } to the Parallel.ForEach method, this doesn't happen and the expected 15-second interval is observed.
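For context, a minimal sketch of such a call; the item list and the DoWork computation are placeholders, not code from the original program:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class Demo
{
    static void Main()
    {
        var items = Enumerable.Range(0, 1000).ToList();

        // Cap the parallelism one below the core count so one core
        // (and thus roughly one pool thread) stays free for other work,
        // such as timer callbacks.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount - 1
        };

        Parallel.ForEach(items, options, item => DoWork(item));
    }

    static void DoWork(int item)
    {
        // Placeholder for the ~10 s computation per object.
    }
}
```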
Is it intended that I have to make sure there is always one core available per active timer? That seems a bit odd, even more so because the "reserved" core could be a valuable resource for my computation.
Edit: As indicated by Yuval, setting a fixed minimum number of threads in the pool via ThreadPool.SetMinThreads solved the problem. I also tried new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount } (i.e. without the -1 from the initial question) for the Parallel.ForEach method, and this also solves the problem. However, I have no good explanation why these modifications helped. Maybe so many threads were created that the timer thread just got "lost" for a "long" time until it was scheduled again.
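A sketch of the SetMinThreads approach; the worker-thread minimum of 64 is an illustrative value, not one taken from the original post:

```csharp
using System.Threading;

class PoolConfig
{
    static void Configure()
    {
        // Read the current configuration first so only the worker
        // thread minimum changes; the I/O completion minimum is
        // passed back unchanged.
        ThreadPool.GetMinThreads(out int workers, out int ioThreads);
        ThreadPool.SetMinThreads(64, ioThreads);
    }
}
```

With a higher minimum, the pool injects new threads immediately instead of at its slow default growth rate, so queued timer callbacks do not wait as long.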
The Parallel class uses an internal TPL facility called self-replicating tasks. They are meant to consume all available thread resources. I don't know what limits are in place, but it seems to be all-consuming. I answered basically the same question a few days ago.
The Parallel class is prone to spawning insane numbers of tasks without any limit. It is easy to provoke it into literally spawning unbounded threads (2 per second). I consider the Parallel class unusable without a manually specified max DOP. It is a time bomb that explodes randomly in production under load.
Parallel is especially poisonous in ASP.NET scenarios, where many requests share one thread pool.
Update: I forgot to make the key point. Timer ticks are queued to the thread pool. If the pool is saturated, they get in line and execute later. (This is also why timer ticks can happen concurrently, or even after a timer is stopped.) This explains what you are seeing. The way to fix it is to fix the pool overloading.
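The effect can be reproduced directly, without Parallel, by flooding the pool with long-running work items; this is a sketch with illustrative numbers:

```csharp
using System;
using System.Threading;

class SaturationDemo
{
    static void Main()
    {
        // A 1-second timer whose callbacks run on the thread pool.
        var timer = new System.Timers.Timer(1000);
        timer.Elapsed += (s, e) =>
            Console.WriteLine($"{DateTime.Now:HH:mm:ss.fff}: tick");
        timer.Enabled = true;

        // Occupy every pool thread with blocking work. Once the pool
        // is saturated, Elapsed callbacks queue up behind these items
        // and fire far later than the configured interval.
        for (int i = 0; i < 200; i++)
            ThreadPool.QueueUserWorkItem(_ => Thread.Sleep(10_000));

        Thread.Sleep(30_000); // observe the (delayed) ticks
    }
}
```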
The best fix for this particular scenario would be a custom task scheduler with a fixed number of threads. Parallel can be made to use such a scheduler via ParallelOptions.TaskScheduler, and the Parallel Extension Extras include one. Get that work off the globally shared thread pool. Normally I would recommend PLINQ, but PLINQ cannot take a scheduler. In a sense, both Parallel and PLINQ are needlessly crippled APIs.
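A sketch of wiring a bounded scheduler into Parallel; LimitedConcurrencyLevelTaskScheduler here refers to the well-known sample scheduler from the TaskScheduler documentation / Parallel Extension Extras, assumed to be compiled into the project:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class SchedulerDemo
{
    static void Main()
    {
        // Caps how many work items the loop can run concurrently,
        // regardless of how aggressively Parallel tries to replicate.
        var scheduler = new LimitedConcurrencyLevelTaskScheduler(
            Environment.ProcessorCount);

        var options = new ParallelOptions
        {
            TaskScheduler = scheduler,
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.ForEach(Enumerable.Range(0, 1000), options, item =>
        {
            // Long-running per-object computation goes here.
        });
    }
}
```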
Don't use ThreadPool.SetMinThreads. Don't mess with global, process-wide settings; leave the poor thread pool alone. Also, don't use Environment.ProcessorCount - 1, because that wastes one core.
"the timer is already executed in its own thread"
The timer is a data structure in the OS kernel. There is no thread until a tick must be queued. I'm not sure exactly how that works, but in .NET the tick is eventually queued to the thread pool. That's where the problem starts. As a workaround you could start a dedicated thread that sleeps in a loop to simulate a timer. That's a hack, though, because it does not fix the root cause: the overloaded thread pool.
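For completeness, a sketch of that hack; the 15-second interval matches the question, the rest is illustrative:

```csharp
using System;
using System.Threading;

class DedicatedTimer
{
    static void Main()
    {
        // A dedicated thread is unaffected by thread-pool saturation,
        // so the interval stays close to 15 s even under heavy load.
        var thread = new Thread(() =>
        {
            while (true)
            {
                Thread.Sleep(TimeSpan.FromSeconds(15));
                Console.WriteLine(DateTime.Now + ": Timer fired");
            }
        });
        thread.IsBackground = true; // don't keep the process alive
        thread.Start();

        Console.ReadLine(); // keep the demo running
    }
}
```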