We all know that the logical && operator short-circuits if the left operand is false, because if one operand is false, the result must also be false.
Why doesn't the bitwise & operator also short-circuit? If the left operand is 0, then we know the result is also 0. Every language I've tested this in (C, JavaScript, C#) evaluates both operands instead of stopping after the first.
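For example, a small C program makes the observation concrete (right_operand is just an illustrative helper with a visible side effect):

```c
#include <stdio.h>

/* Helper with a visible side effect, so we can see whether it was evaluated. */
static int right_operand(void) {
    puts("  right operand evaluated");
    return 1;
}

int main(void) {
    int left = 0;

    puts("logical &&:");
    printf("  result: %d\n", left && right_operand());  /* helper never runs: && short-circuits */

    puts("bitwise &:");
    printf("  result: %d\n", left & right_operand());   /* helper runs anyway: & evaluates both operands */

    return 0;
}
```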
Is there any reason why it would be a bad idea to let the & operator short-circuit? If not, why don't most languages make it short-circuit? It seems like an obvious optimization.
I'd guess it's because a bitwise and in the source language typically gets translated fairly directly to a bitwise and instruction executed by the processor, which in turn is implemented as the appropriate number of and gates in the hardware.
I don't see this as optimizing much of anything in most cases. Evaluating the second operand will normally cost less than testing to see whether you should evaluate it.
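To make the cost argument concrete, here is a rough sketch, written as plain C rather than any particular compiler's output, of what a hypothetical short-circuiting & would have to do compared to the current semantics (the function names are made up for illustration):

```c
/* What "x & f()" means today: evaluate both sides, then one and operation. */
int and_plain(int x, int (*f)(void)) {
    return x & f();
}

/* What a short-circuiting "x & f()" would have to mean: compare x against 0
 * and branch around the right operand, on every bitwise-and expression. */
int and_short_circuit(int x, int (*f)(void)) {
    if (x == 0)
        return 0;      /* extra test and branch, even when f() is dirt cheap */
    return x & f();
}
```

The short-circuiting version pays for a compare and a branch on every bitwise and, which will usually cost more than simply computing the right operand.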
Short-circuiting is not an optimization device. It is a control flow device. If you fail to short-circuit p != NULL && *p != 0, you will not get a marginally slower program; you will get a crashing program.
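A minimal C illustration of that point, with hypothetical function names:

```c
#include <stddef.h>

/* Safe: && evaluates *p only after confirming that p is non-NULL. */
int nonempty_and(const char *p) {
    return p != NULL && *p != 0;
}

/* Unsafe: & evaluates both operands, so *p is dereferenced even when
 * p is NULL -- undefined behavior, typically a crash. */
int nonempty_bitand(const char *p) {
    return (p != NULL) & (*p != 0);
}

int main(void) {
    nonempty_and(NULL);        /* fine: returns 0 without touching *p */
    /* nonempty_bitand(NULL);      would dereference a null pointer */
    return 0;
}
```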
This kind of short-circuiting almost never makes sense for bitwise operators, and it would be more expensive than the normal non-short-circuiting operation.