Apparently, after reading the old title, which was "Why do questions like 'is ++i faster than i+=1?' even exist?", people didn't bother to read the question itself thoroughly.
The question was not about people's reasons for asking that! It was about why a compiler would ever make a difference between ++i and i+=1, and whether there are any possible scenarios where that would make sense. While I appreciate all your witty and profound comments, my question was not about that.
Well, alright, let me try to put the question another way; I hope my English is good enough and I can express myself without being misunderstood this time, so please read it. Let's say someone read this in a 10-year-old book:
Using ++i over i=i+1 gives you a performance advantage.
I'm not dwelling on this particular example; I'm talking more or less generally.
Obviously, when the author was writing the book, it made sense to him; he didn't just make it up. We know that modern compilers do not care whether you use ++i, i+=1 or i = i + 1: the code will be optimized and we will get the same asm output.
This seems quite logical: if two operations do the same thing, and have the same result, there is no reason to compile ++i into one thing, and i+=1 into another thing.
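As an illustration (my addition, not part of the original question): any reasonably modern optimizing compiler, e.g. gcc or clang at -O2, would typically be expected to compile all three of these functions to identical machine code.

    // Three equivalent ways of incrementing an int; with optimization
    // enabled, a modern compiler is expected to emit the same code for all.
    int pre_increment(int i)  { ++i;       return i; }
    int plus_equals(int i)    { i += 1;    return i; }
    int plain_addition(int i) { i = i + 1; return i; }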
But since the book author wrote it, he had seen the difference! It means that some compiler actually produced different output for those two lines. It means that the guys who made that compiler had some reason for treating ++i and i+=1 differently. My question is: why would they ever do so?
Is it just because it was hard or impossible to make compilers advanced enough to perform such optimizations in those days? Or maybe on some very specific platforms/hardware, or in some special scenario, it actually makes sense to make a difference between ++i and i+=1 and other things of that kind? Or maybe it depends on the variable type? Or were the compiler devs just lazy?
Imagine a non-optimizing compiler. It really doesn't care whether ++i is equivalent to i+=1 or not; it just emits the first thing it can think of that works. It knows that the CPU has an instruction for addition, and it knows that the CPU has an instruction to increment an integer. So, assuming i has type int, for ++i it emits something like:
inc <wherever_i_is>
For i+=1, it emits something like:
load the constant 1 into a register
add <wherever_i_is> to that register
store that register to <wherever_i_is>
In order to determine that the latter code "should" be the same as the former, the compiler has to notice that the constant being added is 1, rather than 2 or 1007. That takes dedicated code in the compiler, the standard doesn't require it, and not every compiler has always done it.
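To make that concrete, here is a minimal, hypothetical sketch (not taken from any real compiler) of how a naive, non-optimizing code generator might handle the two forms. Without a dedicated check for "the constant being added happens to be 1", the two source forms naturally fall into different emission paths:

    #include <iostream>
    #include <string>

    // Hypothetical AST node kinds for the two source forms.
    enum class NodeKind { PreIncrement, AddAssign };

    // A naive code generator: it maps each node kind straight to an
    // instruction pattern and never asks "is this constant equal to 1?".
    void emit(NodeKind kind, const std::string& var, int constant = 1) {
        switch (kind) {
        case NodeKind::PreIncrement:
            // ++i has its own dedicated pattern: a single increment.
            std::cout << "  inc " << var << "\n";
            break;
        case NodeKind::AddAssign:
            // i += c goes through the generic load/add/store pattern,
            // even when c happens to be 1.
            std::cout << "  load r0, " << constant << "\n"
                      << "  add  r0, " << var << "\n"
                      << "  store " << var << ", r0\n";
            break;
        }
    }

    int main() {
        emit(NodeKind::PreIncrement, "i");  // ++i
        emit(NodeKind::AddAssign, "i", 1);  // i += 1
    }

Recognizing that the AddAssign case with a constant of 1 could reuse the single-increment pattern is exactly the kind of special case that takes extra code to spot, which is the point above.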
So your question amounts to: "why would a compiler ever be dumber than me, since I've spotted this equivalence and it hasn't?" To which the answer is that modern compilers are smarter than you a lot of the time, but not always, and it hasn't always been the case.
"since the book author wrote it, he had seen the difference"
Not necessarily. If you see a pronouncement about what's "faster", sometimes the author of the book is dumber than both you and the compiler. Sometimes he's smart, but he cleverly formed his rules of thumb under conditions that no longer apply. Sometimes he has speculated about the existence of a compiler as dumb as the one I described above, without actually checking whether any compiler that you'd ever actually use, was really that dumb. Like I just did ;-)
Btw, 10 years ago is way too recent for a decent compiler with optimization enabled not to make this particular optimization. The exact timescale probably isn't relevant to your question, but if an author wrote that and their excuse was "that was way back in 2002", then personally I wouldn't accept it. The statement wasn't any more correct then than it is now. If they said 1992 then OK; personally I don't know what compilers were like then, so I couldn't contradict them. If they said 1982 then I'd still be suspicious (after all, C++ had been invented by then, and much of its design relies on an optimizing compiler to avoid a hefty amount of wasteful work at runtime, though I'll grant that the biggest user of that fact is the template containers/algorithms, which didn't exist in 1982). If they said 1972, I'd probably just believe them. There certainly was a period in which C compilers were glorified assemblers.