I have a Parallel.For and a regular for loop doing some simple arithmetic, just to benchmark Parallel.For.
My conclusion is that the regular for loop is faster on my i5 notebook processor.
This is my code:
using System;
using System.Diagnostics;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            int Iterations = int.MaxValue / 1000;
            DateTime StartTime;
            DateTime EndTime;

            // Parallel.For, first run
            StartTime = DateTime.Now;
            Parallel.For(0, Iterations, i =>
            {
                OperationDoWork(i);
            });
            EndTime = DateTime.Now;
            Console.WriteLine(EndTime.Subtract(StartTime).ToString());

            // plain for, first run
            StartTime = DateTime.Now;
            for (int i = 0; i < Iterations; i++)
            {
                OperationDoWork(i);
            }
            EndTime = DateTime.Now;
            Console.WriteLine(EndTime.Subtract(StartTime).ToString());

            // Parallel.For, second run
            StartTime = DateTime.Now;
            Parallel.For(0, Iterations, i =>
            {
                OperationDoWork(i);
            });
            EndTime = DateTime.Now;
            Console.WriteLine(EndTime.Subtract(StartTime).ToString());

            // plain for, second run
            StartTime = DateTime.Now;
            for (int i = 0; i < Iterations; i++)
            {
                OperationDoWork(i);
            }
            EndTime = DateTime.Now;
            Console.WriteLine(EndTime.Subtract(StartTime).ToString());
        }

        // Trivial arithmetic on locals; no shared state, very little work per call
        private static void OperationDoWork(int i)
        {
            int a = 0;
            a += i;
            i = a;
            a *= 2;
            a = a * a;
            a = i;
        }
    }
}
And these are my results, which do not change much on repetition:
00:00:03.9062234
00:00:01.7971028
00:00:03.2231844
00:00:01.7781017
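(Side note: DateTime.Now only ticks every ~15 ms on Windows, so Stopwatch is the more precise timer for this kind of micro-benchmark. A minimal sketch over the same loops, reusing Iterations and OperationDoWork from above:)

// Stopwatch (System.Diagnostics) wraps the high-resolution performance counter.
var sw = Stopwatch.StartNew();
Parallel.For(0, Iterations, i => OperationDoWork(i));
sw.Stop();
Console.WriteLine("Parallel.For: {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
for (int i = 0; i < Iterations; i++)
    OperationDoWork(i);
sw.Stop();
Console.WriteLine("for:          {0} ms", sw.ElapsedMilliseconds);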
So why ever use Parallel.For?
Parallel processing has organization overhead. Think of it in terms of having 100 tasks and 10 people to do them: it's not easy to have 10 people working for you; just organizing who does what costs time on top of actually doing the 100 tasks.
So if you want to do something in parallel, make sure there is so much work that the cost of organizing the parallelism is small compared to the actual workload.
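One standard way to shrink that organization overhead is to give each worker a chunk of the range instead of a single index, for example with Partitioner.Create from System.Collections.Concurrent. A minimal sketch, reusing Iterations and OperationDoWork from the question:

// Each task receives a (fromInclusive, toExclusive) range and loops over it
// locally, so the per-index delegate overhead largely disappears.
Parallel.ForEach(Partitioner.Create(0, Iterations), range =>
{
    for (int i = range.Item1; i < range.Item2; i++)
        OperationDoWork(i);
});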
One of the most common mistakes people make when first delving into multithreading is the belief that multithreading is a free lunch.
In truth, splitting your operation into multiple smaller operations that can then run in parallel takes some extra time. And if badly synchronized, your tasks may well spend even more time waiting for other tasks to release their locks.
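For instance, here is a deliberately bad variant (illustrative only, not from the question) where every iteration takes the same lock; the lock serializes the loop, so the "parallel" version can easily end up slower than the sequential one:

// Every iteration contends for one shared lock, so the iterations
// effectively run one at a time, plus the cost of the locking itself.
object gate = new object();
long total = 0;
Parallel.For(0, Iterations, i =>
{
    lock (gate)
    {
        total += i;
    }
});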
As a result, parallelizing is not worth the time and trouble when each task does very little work, which is the case with OperationDoWork.
Consider trying this out:
private static void OperationDoWork(int i)
{
    // Much heavier per-iteration workload: 100 chained Math.Pow calls
    double a = 101.1D * i;
    for (int k = 0; k < 100; k++)
        a = Math.Pow(a, a);
}
According to my benchmark, for will average 5.7 seconds, while Parallel.For will take 3.05 seconds on my Core2Duo CPU (speedup == ~1.87). On my quad-core i7, I get an average of 5.1 seconds with for, and an average of 1.38 seconds with Parallel.For (speedup == ~3.7).
This modified code scales very well to the number of physical cores available. Q.E.D.
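If you want to verify the scaling yourself, you can cap the worker count with ParallelOptions.MaxDegreeOfParallelism and time the heavy OperationDoWork at each level. A sketch, again reusing Iterations from the question:

// Re-run the heavy loop with 1..N workers to watch the runtime shrink
// as the degree of parallelism grows (up to the physical core count).
for (int workers = 1; workers <= Environment.ProcessorCount; workers++)
{
    var options = new ParallelOptions { MaxDegreeOfParallelism = workers };
    var sw = Stopwatch.StartNew();
    Parallel.For(0, Iterations, options, i => OperationDoWork(i));
    sw.Stop();
    Console.WriteLine("{0} worker(s): {1:F2} s", workers, sw.Elapsed.TotalSeconds);
}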