
Why does C# arithmetic on double appear to be faster than arithmetic on long?

The surprising output of the code below shows that arithmetic on double is almost twice as fast as arithmetic on long:

Test_DivOperator Float arithmetic measured time: 15974.5024 ms.

Test_DivOperator Integer arithmetic measured time: 28548.183 ms.

Build settings: .NET 4.5, C# 5.0 (platform target: x64).

Hardware: Intel Core i5-2520M (running Windows 7 64-bit).

Note: the choice of operator (division, here) does affect the results; division maximizes this observation. A multiplication variant is sketched after the code below.

// requires: using System; using System.Diagnostics;
const int numOfIterations = 1; // this value takes memory access out of the game
const int numOfRepetitions = 500000000; // CPU-bound workload
Random rand = new Random();
double[] Operand1 = new double[numOfIterations];
double[] Operand2 = new double[numOfIterations];
double[] Operand3 = new double[numOfIterations];

long[] Int64Operand1 = new long[numOfIterations];
long[] Int64Operand2 = new long[numOfIterations];
long[] Int64Operand3 = new long[numOfIterations];

for (int i = 0; i < numOfIterations; i++)
{
    Operand1[i] = rand.NextDouble() * 100;
    Operand2[i] = rand.NextDouble() * 80;
    Operand3[i] = rand.NextDouble() * 17;
    Int64Operand1[i] = (long)Operand1[i];
    Int64Operand2[i] = (long)Operand2[i] + 1; // +1 guards against division by zero
    Int64Operand3[i] = (long)Operand3[i] + 1;
}

double[] StdResult = new double[numOfIterations];
long[] NewResult = new long[numOfIterations];

TimeSpan begin = Process.GetCurrentProcess().TotalProcessorTime;

for (int j = 0; j < numOfRepetitions; j++)
{
    for (int i = 0; i < numOfIterations; i++)
    {
        double result = Operand1[i] / Operand2[i];
        result = result / Operand3[i];
        StdResult[i] = result;
    }

}

TimeSpan end = Process.GetCurrentProcess().TotalProcessorTime;
Console.WriteLine("Test_DivOperator Float arithmetic measured time: " + (end - begin).TotalMilliseconds + " ms.");

begin = Process.GetCurrentProcess().TotalProcessorTime;

for (int j = 0; j < numOfRepetitions; j++)
{
    for (int i = 0; i < numOfIterations; i++)
    {
        long result = Int64Operand1[i] / Int64Operand2[i];
        result = result / Int64Operand3[i];
        NewResult[i] = result;
    }

}

end = Process.GetCurrentProcess().TotalProcessorTime;
Console.WriteLine("Test_DivOperator Integer arithmetic measured time: " + (end - begin).TotalMilliseconds + " ms.");
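
As the note above says, the operator is what drives the gap. For comparison, here is a hypothetical multiplication variant of the two inner loops (same arrays and loop structure as above, not part of the measured code), which should show a much smaller difference:

for (int j = 0; j < numOfRepetitions; j++)
{
    for (int i = 0; i < numOfIterations; i++)
    {
        // same benchmark shape, division replaced by multiplication
        double result = Operand1[i] * Operand2[i];
        StdResult[i] = result * Operand3[i];
    }
}

for (int j = 0; j < numOfRepetitions; j++)
{
    for (int i = 0; i < numOfIterations; i++)
    {
        long result = Int64Operand1[i] * Int64Operand2[i];
        NewResult[i] = result * Int64Operand3[i];
    }
}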
asked Jul 18 '15 by Ahmed Khalaf


1 Answer

This isn't unexpected; 64-bit integer division is just that slow.

Your processor is a Sandy Bridge. Looking at a table of instruction latencies and throughputs, 64-bit idiv has much higher latency and much worse throughput than divsd.

Other microarchitectures show a similar difference.

Doing the actual math: 2.8548183E10 ns / 500,000,000 repetitions = 57 ns per iteration. At a frequency of 3.2 GHz that's about 183 cycles; with two divisions plus some additional overhead per iteration, that is not weird.

For doubles it works out to 32 ns, or about 102 cycles, which is actually more than I would have expected.
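
That arithmetic is easy to reproduce. A minimal sketch, assuming the 3.2 GHz clock used above (the helper name is illustrative):

// requires: using System;
const double clockGHz = 3.2;           // assumed clock of the i5-2520M, as above
const double repetitions = 500000000;  // numOfRepetitions from the question

double CyclesPerIteration(double measuredMs)
{
    double nsPerIteration = measuredMs * 1e6 / repetitions; // ms -> ns, spread over all repetitions
    return nsPerIteration * clockGHz;                       // at f GHz, 1 ns = f cycles
}

Console.WriteLine(CyclesPerIteration(28548.183));  // long:   ~183 cycles per iteration
Console.WriteLine(CyclesPerIteration(15974.5024)); // double: ~102 cycles per iteration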

answered Nov 14 '22 by harold