Consider the following code snippet, a traversal of a three-dimensional array of singles, in terms of execution efficiency, assuming that process1() and process2() take identical lengths of time to execute:
float[,,] arr = new float[mMax, nMax, oMax];

for (int m = 0; m < mMax; m++)
    for (int n = 0; n < nMax; n++)
        for (int o = 0; o < oMax; o++)
            process1(arr[m, n, o]);

for (int o = 0; o < oMax; o++)
    for (int n = 0; n < nMax; n++)
        for (int m = 0; m < mMax; m++)
            process2(arr[m, n, o]);
Now, it's known that the .NET Framework stores multidimensional arrays in row-major order. Without any optimization I would assume that the first loop, which walks memory sequentially, will execute much faster than the second one.
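That assumption can be checked with a small timing sketch (the array sizes and the Stopwatch-based measurement below are illustrative, not from the original code):

```csharp
using System;
using System.Diagnostics;

class TraversalBench
{
    static void Main()
    {
        // Illustrative sizes, large enough that the array does not fit in cache.
        const int mMax = 256, nMax = 256, oMax = 256;
        var arr = new float[mMax, nMax, oMax];

        var sw = Stopwatch.StartNew();
        float sum1 = 0;
        for (int m = 0; m < mMax; m++)
            for (int n = 0; n < nMax; n++)
                for (int o = 0; o < oMax; o++)
                    sum1 += arr[m, n, o];   // unit stride: consecutive floats in memory
        Console.WriteLine($"m-n-o order: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        float sum2 = 0;
        for (int o = 0; o < oMax; o++)
            for (int n = 0; n < nMax; n++)
                for (int m = 0; m < mMax; m++)
                    sum2 += arr[m, n, o];   // strided: jumps nMax * oMax floats each step
        Console.WriteLine($"o-n-m order: {sw.ElapsedMilliseconds} ms");
    }
}
```

On typical hardware the second traversal is noticeably slower, because nearly every access lands on a different cache line.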
The question is: does the CLR's JIT or the csc.exe/vbc.exe compilers detect and optimize loops like this, perhaps by reordering the nesting? Or should I always be on my guard for potential performance hits, especially if I try to parallelize the loops?
This is the kind of optimization you might expect from a C or C++ compiler. It is actually a rather current topic: this exact optimization was discussed in a video of a Build 2013 session. While targeted at C/C++ programmers, a lot of what is covered there is interesting to C# programmers as well; the constraints of the memory subsystem are equally relevant. I'm not actually sure the optimization made it into VS2013; iirc there was a problem with it slowing down the native compiler too much.
But no, the jitter's optimizer works on a very tight time budget. Spending too much time on analysis causes noticeable startup delays and execution pauses, so it cannot afford this kind of loop-nest transformation. C# programmers have to do this themselves.
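Concretely, "doing it yourself" means ordering the nest so the last index varies fastest, and, when parallelizing, partitioning on the outermost index so each thread scans a contiguous slab of the array. A sketch using Parallel.For (the sizes and the Process2 stub are placeholders for the question's code, not part of it):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ParallelTraversal
{
    static long visits;

    // Stand-in for the question's process2(); counts calls thread-safely.
    static void Process2(float x) => Interlocked.Increment(ref visits);

    static void Main()
    {
        const int mMax = 64, nMax = 64, oMax = 64;
        var arr = new float[mMax, nMax, oMax];

        // Partition on the outermost index m: each thread owns whole
        // contiguous slabs, and o, the unit-stride index, stays innermost.
        Parallel.For(0, mMax, m =>
        {
            for (int n = 0; n < nMax; n++)
                for (int o = 0; o < oMax; o++)
                    Process2(arr[m, n, o]);
        });

        Console.WriteLine(visits);   // 64 * 64 * 64 = 262144
    }
}
```

Parallelizing the cache-hostile o-n-m order instead would give each thread a strided access pattern and make the memory problem worse, not better.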