Since this question is about the increment operator and speed differences with prefix/postfix notation, I will describe the question very carefully lest Eric Lippert discover it and flame me!
(further detail on why I am asking can be found at http://www.codeproject.com/KB/cs/FastLessCSharpIteration.aspx?msg=3899456#xx3899456xx/)
I have four snippets of code as follows:-
(1) Separate, Prefix:
for (var j = 0; j != jmax;) { total += intArray[j]; ++j; }
(2) Separate, Postfix:
for (var j = 0; j != jmax;) { total += intArray[j]; j++; }
(3) Indexer, Postfix:
for (var j = 0; j != jmax;) { total += intArray[j++]; }
(4) Indexer, Prefix:
for (var j = -1; j != last;) { total += intArray[++j]; } // last = jmax - 1
What I was trying to do was prove/disprove whether there is a performance difference between prefix and postfix notation in this context (i.e. a local variable, so not volatile, not changeable from another thread, etc.) and, if there was, why that would be.
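For context, the timing harness was along these lines (a minimal Stopwatch-based sketch, not the exact test code; the method names and array size are mine):

using System;
using System.Diagnostics;

static class IncrementBenchmark
{
    static void Main()
    {
        const int jmax = 16 * 1024 * 1024; // assumed size; the exact value is not critical
        var intArray = new int[jmax];
        for (int i = 0; i < jmax; i++) intArray[i] = 1;

        // Warm-up pass so each loop is jitted before it is timed.
        SeparatePrefix(intArray, jmax);
        SeparatePostfix(intArray, jmax);
        IndexerPostfix(intArray, jmax);
        IndexerPrefix(intArray, jmax);

        Time("(1) Separate, Prefix", () => SeparatePrefix(intArray, jmax));
        Time("(2) Separate, Postfix", () => SeparatePostfix(intArray, jmax));
        Time("(3) Indexer, Postfix", () => IndexerPostfix(intArray, jmax));
        Time("(4) Indexer, Prefix", () => IndexerPrefix(intArray, jmax));
    }

    static void Time(string name, Func<int> test)
    {
        var sw = Stopwatch.StartNew();
        int total = test();
        sw.Stop();
        Console.WriteLine("{0}: total={1}, {2} ms", name, total, sw.ElapsedMilliseconds);
    }

    static int SeparatePrefix(int[] intArray, int jmax)
    {
        var total = 0;
        for (var j = 0; j != jmax;) { total += intArray[j]; ++j; }
        return total;
    }

    static int SeparatePostfix(int[] intArray, int jmax)
    {
        var total = 0;
        for (var j = 0; j != jmax;) { total += intArray[j]; j++; }
        return total;
    }

    static int IndexerPostfix(int[] intArray, int jmax)
    {
        var total = 0;
        for (var j = 0; j != jmax;) { total += intArray[j++]; }
        return total;
    }

    static int IndexerPrefix(int[] intArray, int jmax)
    {
        var total = 0;
        var last = jmax - 1;
        for (var j = -1; j != last;) { total += intArray[++j]; }
        return total;
    }
}

Returning each total and printing it keeps the jitter from eliminating the loops as dead code.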
Speed testing showed that:
(1) and (2) run at the same speed as each other.
(3) and (4) run at the same speed as each other.
(3)/(4) are ~27% slower than (1)/(2).
Therefore I conclude that there is no performance advantage in choosing prefix notation over postfix notation per se. However, when the Result of the Operation is actually used, the code is slower than when it is simply thrown away.
I then had a look at the generated IL using Reflector and found the following:
The number of IL bytes is identical in all cases.
The .maxstack varied between 4 and 6, but I believe that is used only for verification purposes and so is not relevant to performance.
(1) and (2) generated exactly the same IL, so it's no surprise that the timing was identical. So we can ignore (1).
(3) and (4) generated very similar code - the only relevant difference being the positioning of a dup opcode to account for the Result of the Operation. Again, no surprise about timing being identical.
So I then compared (2) and (3) to find out what could account for the difference in speed:
(2) uses a ldloc.0 op twice (once as part of the indexer and then later as part of the increment).
(3) uses ldloc.0 followed immediately by a dup op.
So the relevant IL for incrementing j in (1) (and (2)) is:
// ldloc.0 already used once for the indexer operation higher up
ldloc.0
ldc.i4.1
add
stloc.0
(3) looks like this:
ldloc.0
dup // j on the stack for the *Result of the Operation*
ldc.i4.1
add
stloc.0
(4) looks like this:
ldloc.0
ldc.i4.1
add
dup // j + 1 on the stack for the *Result of the Operation*
stloc.0
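In source terms, the dup placement corresponds roughly to the following (my paraphrase of the IL above, not compiler output):

int[] intArray = { 1, 2, 3, 4 };
var total = 0;
var j = 0;

// (1)/(2): the incremented value is never used, so no dup is needed.
j = j + 1;                  // ldloc.0 / ldc.i4.1 / add / stloc.0

// (3) j++ as the indexer: dup keeps the *old* value of j for the index.
int oldJ = j;               // ldloc.0 / dup
j = oldJ + 1;               // ldc.i4.1 / add / stloc.0
total += intArray[oldJ];

// (4) ++j as the indexer: dup keeps the *new* value of j for the index.
int newJ = j + 1;           // ldloc.0 / ldc.i4.1 / add / dup
j = newJ;                   // stloc.0
total += intArray[newJ];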
Now (finally!) to the question:
Is (2) faster because the JIT compiler recognises the pattern ldloc.0/ldc.i4.1/add/stloc.0 as simply incrementing a local variable by 1 and optimizes it? (And does the presence of a dup in (3) and (4) break that pattern, so that the optimization is missed?)
And a supplementary: if this is true then, for (3) at least, wouldn't replacing the dup with another ldloc.0 reintroduce that pattern?
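For concreteness, that hypothetical rewrite of (3) would look like this (my sketch of the suggested IL, not compiler output):

ldloc.0 // j on the stack for the *Result of the Operation* (replaces the dup)
ldloc.0 // the ldloc.0/ldc.i4.1/add/stloc.0 pattern starts here
ldc.i4.1
add
stloc.0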
OK, after much research (sad I know!), I think I have answered my own question:
The answer is maybe. The JIT compilers do look for patterns (see http://blogs.msdn.com/b/clrcodegeneration/archive/2009/08/13/array-bounds-check-elimination-in-the-clr.aspx) to decide when and how array bounds checking can be optimized, but whether it is the same pattern I was guessing at, I don't know.
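For reference, the shape that blog post describes the JIT recognising is a loop bounded by the array's own Length (my illustration):

// When the bound is intArray.Length and the comparison is <, the JIT can
// prove every access is in range and drop the per-element bounds check.
// The j != jmax form used in my snippets does not obviously match this.
var total = 0;
for (var j = 0; j < intArray.Length; j++)
{
    total += intArray[j];
}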
In this case, it is a moot point because the relative speed increase of (2) was due to something more than that. It turns out that the x64 JIT compiler is clever enough to work out whether an array length is constant (and seemingly also a multiple of the number of unrolls in a loop): so the code was only bounds checking at the end of each iteration, and each unroll became just:-
total += intArray[j]; j++;
00000081 8B 44 0B 10 mov eax,dword ptr [rbx+rcx+10h]
00000085 03 F0 add esi,eax
I proved this by changing the app to let the array size be specified on the command line and seeing the different assembler output.
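That change was along these lines (a sketch; the real app timed all four loops):

using System;

static class Program
{
    static void Main(string[] args)
    {
        // Taking jmax from the command line stops the JIT treating the
        // array length as a compile-time constant, which changes the
        // generated assembly (and the bounds-checking strategy).
        int jmax = args.Length > 0 ? int.Parse(args[0]) : 16 * 1024 * 1024;
        var intArray = new int[jmax];

        var total = 0;
        for (var j = 0; j != jmax;) { total += intArray[j]; j++; }
        Console.WriteLine(total);
    }
}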
Other things discovered during this exercise:-
Interesting results. What I would do is:
And then you'll know whether the jitter is doing a better job with one than the other. The jitter might, for example, be realizing that in one case it can remove array bounds checks, but not realizing that in the other case. I don't know; I'm not an expert on the jitter.
The reason for all the rigamarole is because the jitter may generate different code when the debugger is attached. If you want to know what it does under normal circumstances then you have to make sure the code gets jitted under normal, non-debugger circumstances.
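A simple guard against accidentally timing debugger-jitted code (my addition, not from the answer above) is to refuse to run when a debugger is attached or the build is unoptimized:

using System;
using System.Diagnostics;

static class JitGuard
{
    public static void AssertReleaseRun()
    {
        if (Debugger.IsAttached)
            throw new InvalidOperationException(
                "Debugger attached: the jitter may generate unoptimized code.");
#if DEBUG
        throw new InvalidOperationException(
            "DEBUG build: compile in Release mode before timing.");
#endif
    }
}

Calling this at the top of Main, and running the Release build from the command line rather than from inside Visual Studio, helps ensure the code is jitted under normal, non-debugger circumstances.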