I'm doing heavy mathematical computations using Math.Net Numerics in parallel inside a Parallel.For block.
When I run the code on my local system with 4 cores (2 × 2), it uses all 4 cores.
But when I run the same code on our dev server with 8 cores (4 × 2), it uses only 4 cores.
I've tried setting MaxDegreeOfParallelism, but it didn't help.
Any idea why all the cores are not being utilised?
Below is the sample code.
Parallel.For(0, 10000, i =>
{
    // heavy math computations using matrices
});
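For completeness, here is a minimal sketch of the MaxDegreeOfParallelism attempt via ParallelOptions (the value used is illustrative; Environment.ProcessorCount is an assumption, not taken from the original code):

using System;
using System.Threading.Tasks;

var options = new ParallelOptions
{
    // Illustrative cap; Environment.ProcessorCount is an assumed choice of value.
    MaxDegreeOfParallelism = Environment.ProcessorCount
};

Parallel.For(0, 10000, options, i =>
{
    // heavy math computations using matrices
});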
A single CPU core can run up to 2 hardware threads (with hyper-threading). For example, a dual-core CPU (2 cores) exposes 4 hardware threads.
Yes, a single process can run multiple threads on different cores. Caching is specific to the hardware. Many modern Intel processors have three layers of caching, where the last level cache is shared across cores.
A single thread can only run on one core, but a processor can use dynamic analysis to work out which instructions being executed by a core do not depend on each other and execute them on different execution units simultaneously.
A core is a physical hardware component, whereas a thread is a virtual unit of execution that the scheduler manages. Multiple cores increase the amount of work done truly in parallel, whereas multiple threads on one core improve throughput by sharing that core through context switching.
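As a quick sanity check (not part of the answer above), Environment.ProcessorCount reports how many logical processors the runtime sees; on a hyper-threaded machine that is typically cores × 2:

using System;

// Number of logical processors (hardware threads) visible to this process.
Console.WriteLine(Environment.ProcessorCount);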
From MSDN
By default, For and ForEach will utilize however many threads the underlying scheduler provides, so changing MaxDegreeOfParallelism from the default only limits how many concurrent tasks will be used.
The way I read the documentation: if the underlying scheduler only offers a single thread, then setting MaxDegreeOfParallelism > 1
will still result in a single thread.
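One way to see how many worker threads the scheduler actually gives the loop is to record the distinct managed thread IDs observed inside the body. A rough diagnostic sketch (the inner busy-work loop just stands in for the matrix computations):

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

var threadIds = new ConcurrentDictionary<int, bool>();

Parallel.For(0, 10000, i =>
{
    // Remember which managed thread ran this iteration.
    threadIds.TryAdd(Thread.CurrentThread.ManagedThreadId, true);

    // Stand-in for the heavy math so each iteration takes measurable time.
    double x = 0;
    for (int k = 0; k < 100_000; k++) x += Math.Sqrt(k);
});

Console.WriteLine($"Distinct worker threads used: {threadIds.Count}");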
Parallelization is decided at run time, based on the current conditions and a lot of other circumstances. You cannot force .NET to use all the cores (in managed code, at least).
From MSDN:
"Conversely, by default, the Parallel.ForEach and Parallel.For methods can use a variable number of tasks. That's why, for example, the ParallelOptions class has a MaxDegreeOfParallelism property instead of a "MinDegreeOfParallelism" property. The idea is that the system can use fewer threads than requested to process a loop. The .NET thread pool adapts dynamically to changing workloads by allowing the number of worker threads for parallel tasks to change over time. At run time, the system observes whether increasing the number of threads improves or degrades overall throughput and adjusts the number of worker threads accordingly.
Be careful if you use parallel loops with individual steps that take several seconds or more. This can occur with I/O-bound workloads as well as lengthy calculations. If the loops take a long time, you may experience an unbounded growth of worker threads due to a heuristic for preventing thread starvation that's used by the .NET ThreadPool class's thread injection logic. "
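If the pool seems to ramp up slowly for a long-running CPU-bound loop, one thing worth inspecting (a sketch for experimentation, not something the quoted documentation recommends) is the thread pool's minimum worker thread count:

using System;
using System.Threading;

// Inspect the current thread pool limits.
ThreadPool.GetMinThreads(out int minWorker, out int minIo);
ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
Console.WriteLine($"Worker threads: min {minWorker}, max {maxWorker}");

// Optionally raise the minimum so the pool does not wait on its thread
// injection heuristic before adding workers (the value here is illustrative).
ThreadPool.SetMinThreads(Environment.ProcessorCount, minIo);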