I have seen people use addition where a bitwise OR would be more conceptually appropriate, because they believe it is faster. Is this true? If yes, do all modern compilers know this trick?
Both addition and bitwise OR are probably performed in a similar part of the CPU's ALU, and both are single-cycle instructions on modern hardware. There is unlikely to be any measurable performance difference, but this would have to be measured in your situation to be certain.
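If you do want to measure it yourself, a crude sketch like the following is one way to start. This is only a rough harness using standard C's clock(); the volatile qualifiers keep the compiler from folding the operations away, and on any modern CPU the two timings should come out essentially identical:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    const long N = 100000000L;
    /* volatile prevents the compiler from hoisting or folding
       the operation out of the loop */
    volatile unsigned x = 5, y = 3, sink;

    clock_t t0 = clock();
    for (long i = 0; i < N; i++) sink = x + y;
    clock_t t1 = clock();
    for (long i = 0; i < N; i++) sink = x | y;
    clock_t t2 = clock();

    printf("add: %.3fs  or: %.3fs\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```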
Compilers don't need to bother with this trick, because usually the only way a compiler can know that addition and ORing will give the same result is when the operands are constants -- and in that case it can simply do the arithmetic at compile time and not generate any code for it at all.
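A minimal illustration of that constant-folding point: with literal operands, any optimizing compiler folds both expressions to the value 6 at compile time, so neither an ADD nor an OR instruction survives into the binary.

```c
#include <stdio.h>

int main(void) {
    /* Both operands are compile-time constants, so each expression
       is folded to the literal 6; no arithmetic happens at run time. */
    int a = 2 + 4;  /* folded to 6 */
    int b = 2 | 4;  /* folded to 6 */
    printf("%d %d\n", a, b);
    return 0;
}
```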
"Conceptually appropriate" and "faster" are two different things. The former is semantics, while the latter often involves breaking semantics.
As for the question in the title, there's very little (if any) difference in speed. On a CPU where one actually is faster, the compiler will usually make the substitution for you anyway -- but only when it doesn't change the result, which it very well can and usually will, because addition carries between bit positions and OR does not (see the example below).
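To make the "different results" point concrete, here is a small example: the two operators agree only when the operands have no set bits in common.

```c
#include <stdio.h>

int main(void) {
    unsigned x = 5;  /* binary 101 */
    unsigned y = 3;  /* binary 011 */

    /* The low two bits overlap, so addition carries and OR does not. */
    printf("x + y = %u\n", x + y);  /* prints 8 (binary 1000) */
    printf("x | y = %u\n", x | y);  /* prints 7 (binary  111) */

    /* Only when the set bits are disjoint do the two agree. */
    unsigned hi = 0xF0, lo = 0x0A;
    printf("%u %u\n", hi + lo, hi | lo);  /* both print 250 */
    return 0;
}
```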
Write your code correctly -- if you mean to OR, then OR. If add-vs-OR really does turn out to be faster, either your compiler will do it for you or you can change it later, once you've decided whether a potential extra half-nanosecond per iteration is worth the cost in readability and the bugs such a change can introduce (one example of which follows).
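One concrete bug class, sketched with hypothetical flag names: setting a bit flag with + is not idempotent, so "adding" a flag that is already set silently corrupts the value, whereas | is always safe.

```c
#include <stdio.h>

/* Hypothetical flag values, chosen purely for illustration. */
#define FLAG_READ  0x1u
#define FLAG_WRITE 0x2u

int main(void) {
    unsigned flags = FLAG_READ;

    flags |= FLAG_READ;   /* idempotent: flags is still 0x1 */
    printf("OR:  %#x\n", flags);

    flags = FLAG_READ;
    flags += FLAG_READ;   /* bug: flags is now 0x2, i.e. FLAG_WRITE */
    printf("ADD: %#x\n", flags);
    return 0;
}
```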