Some iterators are faster. I found this out because I heard Bob Tabor say on Channel 9 never to copy and paste.
I was in the habit of doing something like this to set array values:
testArray[0] = 0;
testArray[1] = 1;
This is a simplified example, but to avoid copying and pasting (or typing things out again), I suppose I should use a loop. I had a nagging feeling, though, that a loop would be slower than simply listing the assignments, and it looks like I'm right: listing them is a lot faster. In most of my trials the order, fastest to slowest, was the list, the do loop, the for loop, and then the while loop.
Why is listing things faster than using an iterator, and why are iterators different speeds?
Please help me out if I haven't used these iterators in the most efficient way possible.
Here are my results (for a 2-int array), and my code is below (for a 4-int array). I tried this a few times on my Windows 7 64-bit machine.
Either I'm not good at iterating, or using iterators isn't as great as it's made out to be. Please let me know which it is. Thanks so much.
// Requires using System; and using System.Diagnostics; (for Console, TimeSpan and Stopwatch).
int trials = 0;
TimeSpan listTimer = new TimeSpan(0, 0, 0, 0);
TimeSpan forTimer = new TimeSpan(0, 0, 0, 0);
TimeSpan doTimer = new TimeSpan(0, 0, 0, 0);
TimeSpan whileTimer = new TimeSpan(0, 0, 0, 0);
Stopwatch stopWatch = new Stopwatch();
long numberOfIterations = 100000000;
int numElements = 4;
int[] testArray = new int[numElements];
testArray[0] = 0;
testArray[1] = 1;
testArray[2] = 2;
testArray[3] = 3;
// List them
stopWatch.Start();
for (int x = 0; x < numberOfIterations; x++)
{
    testArray[0] = 0;
    testArray[1] = 1;
    testArray[2] = 2;
    testArray[3] = 3;
}
stopWatch.Stop();
listTimer += stopWatch.Elapsed;
Console.WriteLine(stopWatch.Elapsed);
stopWatch.Reset();
// for them
stopWatch.Start();
int q;
for (int x = 0; x < numberOfIterations; x++)
{
    for (q = 0; q < numElements; q++)
        testArray[q] = q;
}
stopWatch.Stop();
forTimer += stopWatch.Elapsed;
Console.WriteLine(stopWatch.Elapsed);
stopWatch.Reset();
// do them
stopWatch.Start();
int r;
for (int x = 0; x < numberOfIterations; x++)
{
    r = 0;
    do
    {
        testArray[r] = r;
        r++;
    } while (r < numElements);
}
stopWatch.Stop();
doTimer += stopWatch.Elapsed;
Console.WriteLine(stopWatch.Elapsed);
stopWatch.Reset();
// while
stopWatch.Start();
int s;
for (int x = 0; x < numberOfIterations; x++)
{
    s = 0;
    while (s < numElements)
    {
        testArray[s] = s;
        s++;
    }
}
stopWatch.Stop();
whileTimer += stopWatch.Elapsed;
Console.WriteLine(stopWatch.Elapsed);
stopWatch.Reset();
Console.WriteLine("listTimer");
Console.WriteLine(listTimer);
Console.WriteLine("forTimer");
Console.WriteLine(forTimer);
Console.WriteLine("doTimer");
Console.WriteLine(doTimer);
Console.WriteLine("whileTimer");
Console.WriteLine(whileTimer);
Console.WriteLine("Enter any key to try again the program");
Console.ReadLine();
trials++;
When I tried a 4-element array, the differences seemed to get a little more pronounced.
I thought it would only be fair if the values for the listThem group were also assigned via a variable, like in the other trials. That did make the listThem group a little slower, but it was still the fastest. Here are the results after a few tries:
And here's how I implemented the list:
int w = 0;
for (int x = 0; x < numberOfIterations; x++)
{
    testArray[w] = w;
    w++;
    testArray[w] = w;
    w++;
    testArray[w] = w;
    w++;
    testArray[w] = w;
    w = 0;
}
I know these results are probably implementation-specific, but you would think Microsoft would warn us of the advantages and disadvantages of each loop when it comes to speed. What do you think? Thanks.
Update: As per the comments, I published the code, and the list is still faster than the loops, but the loops are closer in performance. From fastest to slowest the loops are: for, while, then do. This is a little different, so my guess is that do and while are essentially the same speed, and the for loop is about half a percent faster than the do and while loops, at least on my machine. Here are the results of a few trials:
An iterator or for-each loop is faster than a simple for loop for collections with no random access, while for collections that do allow random access there is no performance difference between a for-each loop, a for loop, and an iterator.
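For instance, here is a minimal C# sketch of that difference, under my own assumptions (the collection sizes and the use of LinkedList<int> and List<int> are illustrative, not from the post): foreach walks a linked list in linear time, while simulating indexed access with ElementAt has to re-walk from the head on every step.

using System;
using System.Collections.Generic;
using System.Linq;

class RandomAccessDemo
{
    static void Main()
    {
        var linked = new LinkedList<int>(Enumerable.Range(0, 10000));
        var list = new List<int>(Enumerable.Range(0, 10000));

        // foreach follows the links directly: O(n) over the whole list.
        long foreachSum = 0;
        foreach (int value in linked)
            foreachSum += value;

        // "Indexing" a linked list walks from the head every time,
        // so this is O(n^2) overall - the case where the iterator wins.
        long indexedSum = 0;
        for (int i = 0; i < linked.Count; i++)
            indexedSum += linked.ElementAt(i);

        // For a List<int>, which has true random access, for and foreach
        // cost essentially the same.
        long listSum = 0;
        for (int i = 0; i < list.Count; i++)
            listSum += list[i];

        Console.WriteLine($"{foreachSum} {indexedSum} {listSum}");
    }
}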
Iterators aren't intended to be faster; they're intended to be as fast (which they are) and significantly more generic.
Iterators: to determine whether to use loops or iterators, you need to know which implementation is faster: the version of the search function with an explicit for loop, or the version with iterators. The iterator version was slightly faster!
Range-for is as fast as possible, since it caches the end iterator, uses pre-increment, and only dereferences the iterator once. So yes, range-for may be slightly faster, and since it's also easier to write there's no reason not to use it (when appropriate).
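The same reasoning carries over to C#'s foreach, which the compiler lowers into an explicit enumerator loop. Roughly (this is a simplified sketch of the expansion, not the exact generated code):

using System;
using System.Collections.Generic;

class ForeachLoweringSketch
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3, 4 };

        // What you write:
        foreach (int n in numbers)
            Console.WriteLine(n);

        // Roughly what the compiler generates (simplified):
        using (List<int>.Enumerator e = numbers.GetEnumerator())
        {
            while (e.MoveNext())
            {
                int current = e.Current;
                Console.WriteLine(current);
            }
        }
    }
}

So there is no built-in penalty for foreach over writing the enumerator loop yourself; any difference comes from what the loop body does.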
Some iterators are faster.
Of course, some iterators do different things. Different code doing different things will run at different speeds.
I was in the habit of doing something like this to set array values:
Firstly, is this really the time you need to make a saving? From your measurements (which are pointless if it was a debug build) it would seem that your extra code brings you a saving of about 10 nanoseconds. If everyone in the world used your application once, the total amount of time you saved all your users would still be less than the extra time you just spent typing. None of them will at any point think "well, there's ten nanoseconds I'll never get back".
but you would think Microsoft would warn us of the advantages and disadvantages of each loop when it comes to speed
No, I really wouldn't.
Especially when you generalise further. For one thing, with a larger loop, the equivalent unwound code could quite possibly be slower, due to the fact that the loop might fit into the instruction cache while the unwound code wouldn't.
For another, iterating and enumerating (which on average tends to be slower again than iterating, but not by much) are much more flexible. They lead to smaller, more idiomatic code. They are applicable to a lot of cases where the sort of unwinding you have either isn't applicable or isn't easily applicable (so you lose any savings you expected because you end up having to do something convoluted). They have a smaller scope for error simply because they have a smaller scope for anything.
So firstly, MS or anyone else can't advise always filling your code with pages of repetitive copy-pasted statements to save a few nanoseconds, because it isn't always going to be the fastest approach anyway; and secondly, they wouldn't do so because of all the other ways the looping code is superior.
Now, there are indeed cases where saving a few nanoseconds is really important, which is when we are doing something several billion times. If a chip manufacturer knocks a few nanoseconds off the time a basic instruction takes, it'll add up to a real win.
In terms of the sort of code we write in C#, we might do an optimisation that unwinds a loop, though it's rarely where the running time we care about is spent.
Let's say I need to do something x times.
First, I do the obvious:
for(int i = 0; i != x; ++i)
    DoSomething();
Let's say my application as a whole is not as fast as I need. The first thing I do is consider just what "fast as I need" means, because unless this is coding for fun (hey, ridiculous efforts in pursuit of speed can be fun), that is the first thing I want to know. I get an answer to that, or more likely several answers (minimum acceptable, minimum target, ideal, and marketing-get-to-brag-about-how-fast-this-is could be different levels).
Then I find which bits of the actual code time is being spent in. There is no point optimising something that takes 10ns in the lifetime of an application when another piece that takes 400ms gets called by an outer loop 1,000 times whenever the user clicks a button, causing a 4 second delay.
Then I reconsider my whole approach - is "do this X times" (which is inherently O(x) in time complexity) the only way to attain my actual goal, or could I do something completely different that is perhaps O(ln x) (that is, rather than taking time proportional to X, it takes time proportional to the logarithm of X)? Could I maybe cache some results so that, for a greater initial running time, I get to save a few milliseconds many thousands of times?
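As a hedged illustration of the caching idea (ExpensiveCalculation and CachedCalculation are hypothetical names of mine, not anything from the code being discussed):

using System;
using System.Collections.Generic;

class CachingSketch
{
    // Stand-in for something genuinely expensive.
    static int ExpensiveCalculation(int input)
    {
        return input * input; // imagine milliseconds of work here
    }

    static readonly Dictionary<int, int> Cache = new Dictionary<int, int>();

    // Pay the cost once per distinct input, then answer from the cache.
    static int CachedCalculation(int input)
    {
        int result;
        if (!Cache.TryGetValue(input, out result))
        {
            result = ExpensiveCalculation(input);
            Cache[input] = result;
        }
        return result;
    }

    static void Main()
    {
        for (int i = 0; i < 10; i++)
            Console.WriteLine(CachedCalculation(i % 3)); // only three real computations
    }
}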
Then I'll see if I can improve the speed of DoSomething(). 99.9% of the time, I'd do better there than in changing the loop, because it's likely taking more time than the couple of nanoseconds the looping itself is taking.
And I might do some really horrible, unidiomatic and confusing things in DoSomething() that I'd normally consider to be bad code, because I'll know that this is the spot where it's worth it (and I'll comment not only to explain how this more confusing code works, but precisely why it was done that way). And I'll measure these changes, and possibly in a few years I'll measure them again, because the fastest approach to something with the current framework on current CPUs may not be the fastest approach on .NET 6.5 now that we've moved the application onto a cool new server with the latest chips Intel brought out in 2017.
Quite possibly I'll have hand-inlined DoSomething() straight into the loop, since the cost of calling a function is almost certainly greater than the cost of the approach to looping (but not completely certainly; there can be surprises with just what gets inlined by the jitter and what effects that has).
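If inlining does turn out to matter, one knob that exists in .NET (from Framework 4.5 onwards) is MethodImplOptions.AggressiveInlining, which asks, but does not force, the jitter to inline a method. A small sketch, with DoSomethingSmall as my own placeholder:

using System;
using System.Runtime.CompilerServices;

class InliningHintSketch
{
    // A hint to the jitter that inlining this method is worthwhile;
    // the jitter is still free to ignore it, which is why you measure
    // rather than guess.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static int DoSomethingSmall(int value)
    {
        return value + 1;
    }

    static void Main()
    {
        int total = 0;
        for (int i = 0; i != 1000; ++i)
            total = DoSomethingSmall(total);
        Console.WriteLine(total);
    }
}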
And maybe, just maybe I'll replace the actual loop with something like:
if(x > 0)
    switch(x & 7)
    {
        case 0:
            DoSomething();
            goto case 7;
        case 7:
            DoSomething();
            goto case 6;
        case 6:
            DoSomething();
            goto case 5;
        case 5:
            DoSomething();
            goto case 4;
        case 4:
            DoSomething();
            goto case 3;
        case 3:
            DoSomething();
            goto case 2;
        case 2:
            DoSomething();
            goto case 1;
        case 1:
            DoSomething();
            if((x -= 8) > 0)
                goto case 0;
            break;
    }
Because this is a way to combine the performance benefits loops have in not taking up massive amounts of instruction memory, with the performance benefits you've found that unwinding a loop by hand brings for short loops; it pretty much uses your approach for groups of 8 items, and loops through chunks of 8.
Why 8? Because it's a reasonable starting point; I'd actually measure different sizes if this were so important a hot-spot in my code. The only time I've ever done this in real (not just-for-fun) .NET code, I ended up doing chunks of 16.
And that one time, the instruction called on each iteration was very short (12 IL instructions that would correspond with the C# code *x++ = *y++), and it was in code designed for the very purpose of letting other code do something fast. The entire code path was one I avoid hitting in most circumstances, with more work going into figuring out when I was better off using or avoiding it than into making that bit as fast as possible.
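For readers who haven't met that statement before, here is a sketch of the kind of code being described (CopyInts and its bounds assumptions are mine, not the code referred to above; it needs to be compiled with unsafe code enabled):

using System;

class PointerCopySketch
{
    // Copies count ints using the *x++ = *y++ style statement mentioned above.
    // The caller is assumed to have checked that both arrays are at least count long.
    static unsafe void CopyInts(int[] source, int[] destination, int count)
    {
        fixed (int* pSrc = source, pDst = destination)
        {
            int* y = pSrc;
            int* x = pDst;
            for (int i = 0; i < count; i++)
            {
                *x++ = *y++;
            }
        }
    }

    static void Main()
    {
        int[] from = { 1, 2, 3, 4 };
        int[] to = new int[4];
        CopyInts(from, to, from.Length);
        Console.WriteLine(string.Join(",", to));
    }
}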
The rest of the time, either unwinding doesn't save much (if anything), or it doesn't save where it matters, or there are other much more pressing optimisations to do before even considering it.
I certainly wouldn't start out with such code; that would be the very definition of a premature optimisation.
As a rule, iterating is fast. It is known to other coders. It is known to the jitter (which can apply some optimisations in some cases). It is understandable. It is short. It is flexible. As a rule, using a foreach is also fast, albeit not as fast as iterating, and it is even more flexible (there are all manner of ways one can use IEnumerable implementations with great efficiency).
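As a hedged illustration of that flexibility (EvenSquares is my own made-up example):

using System;
using System.Collections.Generic;

class EnumerableFlexibilitySketch
{
    // A lazily produced sequence: nothing runs until foreach asks for the next value.
    static IEnumerable<int> EvenSquares(int count)
    {
        for (int i = 0; i < count; i++)
        {
            int square = i * i;
            if (square % 2 == 0)
                yield return square;
        }
    }

    static void Main()
    {
        // The same foreach works over arrays, lists, linked lists,
        // or generated sequences like this one.
        foreach (int value in EvenSquares(10))
            Console.WriteLine(value);
    }
}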
Repetitive code is more brittle and more likely to hide a silly mistake (we all write bugs that make us think "that was so silly it almost isn't good enough to count as a bug"; these are easy to fix, as long as you can find them). It is harder to maintain, and more likely to turn into something even harder to maintain as the project goes on. It is harder to see the big picture of, and it's in the big picture that the biggest performance improvements can be made.
All in all, the reason the guy on the Channel 9 episode didn't warn you that something could make your program maybe 10 ns slower, in certain circumstances, is that he would have been laughed at.