From cppreference.com:
For unsigned a and for signed a with nonnegative values, the value of a >> b is the integer part of a/2^b. For negative a, the value of a >> b is implementation-defined (in most implementations, this performs arithmetic right shift, so that the result remains negative).
In any case, if the value of the right operand is negative or is greater or equal to the number of bits in the promoted left operand, the behavior is undefined.
Why is the behavior undefined when the right operand is greater than or equal to the number of bits in the promoted left operand?
It seems to me that the result should be 0 (at least for unsigned/positive integers)...
In particular, with g++ (version 4.8.4, Ubuntu):
unsigned int x = 1;
cout << (x >> 16 >> 16) << " " << (x >> 32) << endl;
gives: 0 1
According to the C standard, signed integer overflow is undefined behavior too. Some compilers can trap the overflow condition when compiled with suitable options, while others simply assume overflow never happens and generate code accordingly.
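As a minimal sketch (not part of the original answer), the following shows how an optimizer may exploit this assumption; the output depends on the compiler and optimization level:

#include <iostream>
#include <limits>
using namespace std;

int main() {
    int x = numeric_limits<int>::max();
    // x + 1 overflows a signed int, which is undefined behavior.
    // An optimizer may assume overflow never happens and fold the
    // comparison to true; an unoptimized build may print false.
    cout << boolalpha << (x + 1 > x) << endl;
}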
So, in C/C++ programming, undefined behavior means the program may fail to compile, may execute incorrectly (crashing or producing wrong results), or may fortuitously do exactly what the programmer intended.
The right-shift operator causes the bit pattern in shift-expression to be shifted to the right by the number of positions specified by additive-expression . For unsigned numbers, the bit positions that have been vacated by the shift operation are zero-filled.
The right shift is a bitwise operator used in C for operating on bits. It is a binary operator, meaning it takes two operands, and it is written with the >> sign.
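A quick sketch of the zero-filling behavior for unsigned values:

unsigned int a = 0xF0u;   // binary 1111 0000
cout << (a >> 4) << endl; // prints 15 (0000 1111): vacated high bits are zero-filled
cout << (a >> 6) << endl; // prints 3  (0000 0011)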
One of the goals of C++ is to allow for fast, efficient code, "close to the hardware". And on most hardware, an integer right shift or left shift can be implemented by a single opcode. The trouble is that different CPUs behave differently when the shift magnitude is at least the number of bits in the operand.
So if C++ mandated a particular behavior for shift operations, when producing code for a CPU whose opcode behavior doesn't match all the Standard requirements, compilers would need to insert checks and logic to make sure the result is as defined by the Standard in all cases. This would need to happen to almost all uses of the built-in shift operators, unless an optimizer can prove the corner case won't actually happen. The added checks and logic would potentially slow down the program.
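For example, if the standard required oversized shifts to yield 0, the compiler would effectively have to expand every shift into something like the following sketch (the helper name is illustrative, and it assumes a 32-bit unsigned int):

unsigned int shift_right_checked(unsigned int value, unsigned int count)
{
    // The check the compiler would have to insert before every shift
    // whose count is not provably in range (assuming 32-bit unsigned int).
    return count < 32 ? (value >> count) : 0;
}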
To give a specific example, x86 trims the shift count to 5 bits (6 bits for 64-bit shifts), while ARM trims the shift count to 8 bits. With the current C++ standard, compilers for both CPUs can implement shifts with a single opcode.
If the C++ standard were to define the outcome of shifts by more than the operand length in a particular way, compilers targeting at least one of these CPU families (and possibly both, if the required outcome matched neither hardware implementation, like the behavior you suggest) would have to implement each shift operation with branches that produce the required result where the CPU opcode wouldn't.
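That masking also explains the 0 1 output in the question. A sketch, assuming a 32-bit unsigned int on x86 (the exact output may vary, since the behavior is undefined):

unsigned int x = 1;
unsigned int n = 32;
cout << (x >> 16 >> 16) << endl; // two in-range shifts: well-defined, prints 0
cout << (x >> n) << endl;        // if compiled to a hardware shift, x86 masks
                                 // the count to 5 bits (32 & 31 == 0), so the
                                 // CPU computes x >> 0 == 1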