So I have these two methods which are supposed to multiply a 1,000-item array of integers by 2. The first method:
[MethodImpl(MethodImplOptions.NoOptimization)]
static void Power(int[] arr)
{
    for (int i = 0; i < arr.Length; i++)
    {
        arr[i] = arr[i] + arr[i];
    }
}
The second method:
[MethodImpl(MethodImplOptions.NoOptimization)]
static void PowerNoLoop(int[] arr)
{
    int i = 0;
    arr[i] = arr[i] + arr[i];
    i++;
    arr[i] = arr[i] + arr[i];
    i++;
    arr[i] = arr[i] + arr[i];
    i++;
    // ............ 1000 times ............
    arr[i] = arr[i] + arr[i];
}
Note that I use this code only for performance research and that's why it looks so disgusting.
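The measurements below were made by running the test 10 times; a simple Stopwatch loop along the following lines (a sketch, not necessarily the exact harness, assuming Power and PowerNoLoop as declared above sit in the same class) produces output in that format:

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;   // needed for the [MethodImpl] attribute on the two methods

class Harness
{
    static void Main()
    {
        int[] data = new int[1000];

        // Run the test 10 times; each row prints the PowerNoLoop time first, then the Power time.
        for (int run = 0; run < 10; run++)
        {
            var sw = Stopwatch.StartNew();
            PowerNoLoop(data);              // the unrolled method
            sw.Stop();
            TimeSpan noLoop = sw.Elapsed;

            sw = Stopwatch.StartNew();
            Power(data);                    // the plain loop
            sw.Stop();

            Console.WriteLine("{0} {1}", noLoop, sw.Elapsed);
        }
    }

    // Power and PowerNoLoop from above go here.
}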
The surprising result is that Power is almost 50% faster than PowerNoLoop, even though I have checked the decompiled IL of both of them and the body of the for loop is exactly the same as each line in PowerNoLoop. How can this be?
A sample measurement from my machine, running the test 10 times; the PowerNoLoop timing is the first column:
00:00:00.0277138 00:00:00.0001553
00:00:00.0000142 00:00:00.0000057
00:00:00.0000106 00:00:00.0000053
00:00:00.0000084 00:00:00.0000053
00:00:00.0000080 00:00:00.0000053
00:00:00.0000075 00:00:00.0000053
00:00:00.0000080 00:00:00.0000057
00:00:00.0000080 00:00:00.0000053
00:00:00.0000080 00:00:00.0000053
00:00:00.0000075 00:00:00.0000053
Yes, about 50% slower. Notable is the jitter overhead in the first pass through the test; obviously it burns a lot more cycles getting that huge method compiled. Keep in mind that the measurement is vastly different when you don't disable the optimizer: the no-loop version is then ~800% slower.
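If you want to take that first-pass jitter cost out of the measurement, one option is to call both methods once before you start timing; a small sketch (hypothetical, following the harness shape in the question):

int[] warmup = new int[1000];
Power(warmup);        // forces the jitter to compile the loop version ahead of the timed runs
PowerNoLoop(warmup);  // same for the huge unrolled version; this is the expensive compile
// ...then run the timed passes as before; the first row should no longer show the compile spike.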
The first place to always look for an explanation is the generated machine code; you can see it with Debug > Windows > Disassembly. The primary trouble spot is the prologue of the PowerNoLoop() method. It looks like this in x86 code:
067E0048 push ebp ; setup stack frame
067E0049 mov ebp,esp
067E004B push edi ; preserve registers
067E004C push esi
067E004D sub esp,0FA8h ; stack frame size = 4008 bytes
067E0053 mov esi,ecx
067E0055 lea edi,[ebp-0ACCh] ; temp2 variables
067E005B mov ecx,2B1h ; initialize 2756 bytes
067E0060 xor eax,eax ; set them to 0
067E0062 rep stos dword ptr es:[edi]
Note the very large stack frame size, 4008 bytes. That is far too much for a method with only one local variable; it should only require 8 bytes. The extra 4000 bytes are temporary variables, which I named temp2 in the annotated listing above. They are initialized to 0 by the rep stos instruction, and that takes a while. I can't explain 2756.
The individual adds are a very plodding affair in the non-optimized code. I'll spare you the machine code dump and write it in equivalent C# code:
if (i >= arr.Length) goto throwOutOfBoundsException;   // bounds check for the first read
var temp1 = arr[i];
if (i >= arr.Length) goto throwOutOfBoundsException;   // bounds check for the second read
var temp2 = temp1 + arr[i];
if (i >= arr.Length) goto throwOutOfBoundsException;   // bounds check for the store
arr[i] = temp2;
This is repeated over and over again, a thousand times in total. The temp2 variable is the troublemaker; there is one for each individual statement, which adds 4000 bytes to the stack frame size. If anybody has a guess at 2756 then I'd love to hear it in a comment.
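Putting the prologue numbers together, the accounting looks like this (just a restatement of the reasoning above as a comment):

// Rough stack frame accounting for PowerNoLoop under NoOptimization:
//   1000 statements x 4 bytes for each intermediate temp2   = 4000 bytes
//   the method's own local variable, i                      =    8 bytes
//   total reserved by 'sub esp,0FA8h'                       = 4008 bytes
// Only 0x2B1 dwords (689 x 4 = 2756 bytes) of that are zeroed by rep stos;
// why that particular amount is the open question.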
Having to set them all to 0 before the method can start running is, roughly, what produces the 50% slow-down. There is probably some instruction fetch and decode overhead as well; it can't easily be isolated from the measurement.
Also notable is that these temporaries are not eliminated when you remove the [MethodImpl] attribute and allow the optimizer to do its job. The method is in fact not optimized at all, surely because the jitter doesn't want to tackle such a large chunk of code.
The conclusion you should draw is to always leave it up to the jitter optimizer to unroll loops for you. It knows better.