I was working with bit shift operators (see my question Bit Array Equality) and an SO user pointed out a bug in my calculation of the shift operand: I was computing a range of [1,32] instead of [0,31] for an int. (Hurrah for the SO community!)
In fixing the problem, I was surprised to find the following behavior:
-1 << 32 == -1
In fact, it would seem that n << s is compiled (or interpreted by the CLR; I didn't check the IL) as n << (s % bs(n)), where bs(n) is the size, in bits, of n.
I would have expected:
-1 << 32 == 0
It would seem that the compiler realizes you are shifting beyond the size of the target and corrects your mistake.
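A minimal repro of what I'm seeing (the class and variable names are mine; the commented values are the observed output):

    using System;

    class ShiftDemo
    {
        static void Main()
        {
            int n = -1;
            Console.WriteLine(n << 32);        // prints -1, not 0
            Console.WriteLine(n << 33);        // prints -2, same as n << 1
            // The shift count appears to be taken modulo the bit width:
            Console.WriteLine(n << (33 % 32)); // prints -2 as well
        }
    }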
This is purely an academic question, but does anyone know whether this is defined in the spec (I could not find anything at 7.8 Shift operators), whether it is just a fortuitous fact of undefined behavior, or whether there is a case where this might produce a bug?
I believe that the relevant part of the spec is here:
For the predefined operators, the number of bits to shift is computed as follows:
When the type of x is int or uint, the shift count is given by the low-order five bits of count. In other words, the shift count is computed from count & 0x1F.
When the type of x is long or ulong, the shift count is given by the low-order six bits of count. In other words, the shift count is computed from count & 0x3F.
If the resulting shift count is zero, the shift operators simply return the value of x.
The value 32 is 0x20. The expression 0x20 & 0x1F evaluates to 0. Therefore, the shift count is zero and no shift is done; the expression -1 << 32 (or any x << 32) just returns the original value.
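To make the masking concrete, here is a small sketch of my own (not from the spec) showing the rule for both int and long; the commented values follow from the masking described above:

    using System;

    class MaskDemo
    {
        static void Main()
        {
            int count = 32;
            Console.WriteLine(count & 0x1F); // 0: low-order five bits of 32
            Console.WriteLine(-1 << count);  // -1: effective shift count is 0

            // long uses the low-order six bits (count & 0x3F),
            // so shifting by 32 is a real shift but shifting by 64 is not:
            long x = -1L;
            Console.WriteLine(x << 32);      // -4294967296 (actual shift)
            Console.WriteLine(x << 64);      // -1 (64 & 0x3F == 0)
        }
    }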