I remember hearing years ago that it is more efficient to have loops decrement instead of increment, especially when programming microprocessors. Is this true, and if so, what are the reasons?
One thing that occurs to me off the bat is that the end condition on a decrementing loop is likely to be quicker. If you are looping up to a certain value, a comparison against that value is required on every iteration. However, if you are looping down to zero, then on most processors the decrement itself will set the zero flag when the counter hits zero, so no separate comparison instruction is needed before the branch.
Small potatoes, I realize, but in a large, tight inner loop it might matter a great deal.
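To make the two loop shapes concrete, here is a minimal C# sketch; the array-summing body is just a made-up placeholder to give the loop something to do, and the comment on the counting-down version marks where the saved compare would come from at the machine-code level (whether the JIT actually emits that form is not guaranteed).

```csharp
using System;

class LoopDirection
{
    static int SumUp(int[] data)
    {
        int total = 0;
        // Counting up: each iteration compares i against the length
        // (or a cached copy of it) before branching.
        for (int i = 0; i < data.Length; i++)
        {
            total += data[i];
        }
        return total;
    }

    static int SumDown(int[] data)
    {
        int total = 0;
        // Counting down to zero: on many processors the decrement itself
        // sets the zero/sign flags, so the compiled loop can branch on the
        // decrement's result without a separate compare instruction.
        for (int i = data.Length - 1; i >= 0; i--)
        {
            total += data[i];
        }
        return total;
    }

    static void Main()
    {
        int[] data = { 3, 1, 4, 1, 5, 9 };
        Console.WriteLine(SumUp(data));   // 23
        Console.WriteLine(SumDown(data)); // 23
    }
}
```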
In C# it makes no practical difference to efficiency. The only reason to use a decrementing loop is when you are iterating over a collection and removing items as you go, because counting down means removals never shift the indices of elements you have not visited yet.
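A small sketch of that collection case, with a made-up list and an even-number filter purely for illustration, showing why the downward direction keeps the remaining indices valid:

```csharp
using System;
using System.Collections.Generic;

class RemoveWhileLooping
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };

        // Iterate from the end: RemoveAt(i) only shifts elements at indices
        // greater than i, all of which have already been processed, so no
        // element is skipped and the loop bound stays correct.
        for (int i = numbers.Count - 1; i >= 0; i--)
        {
            if (numbers[i] % 2 == 0)
            {
                numbers.RemoveAt(i);
            }
        }

        Console.WriteLine(string.Join(", ", numbers)); // 1, 3, 5
    }
}
```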