The language I use is C. The type of x and n is int.
I have the following line of code:
printf("x=%x,n=%d,first=%x,second=%x\n",x,n,((~(x+0xffffffff))>>n),((~x+1)>>n));
It shows the values of x and n and two ways of computing the complement of x shifted right by n bits. When x=0x80000000, ~(x+0xffffffff)=0x80000000 and ~x+1=0x80000000, yet when these two are shifted right by n bits, the results are different.
By the way, if I change 0xffffffff to ~1+1 (that is, ~(x+(~1+1))), the result is the same as for ~x+1.
I wonder why that happened. Thanks.
The now deleted answer by Pavan Manjunath had the correct answer for one case, assuming that int is as usual a 32-bit type. The integer constant
0xffffffff
has the value 2^32 - 1 and that isn't representable by an int, but it is representable as an unsigned int. So its type is unsigned int (6.4.4.1). Hence x is converted to unsigned int for the addition, and
((~(x+0xffffffff))>>n)
evaluates as
((~(0x80000000u + 0xffffffffu)) >> n)
((~0x7fffffffu) >> n)
(0x80000000u >> n)
with the value 2^(31-n) if 0 <= n < 32 (it's undefined behaviour if n is outside that range).
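For concreteness, here is a minimal sketch of that evaluation, assuming a 32-bit int and picking x = 0x80000000 and n = 4 purely for illustration:

#include <stdio.h>

int main(void)
{
    int x = 0x80000000;  /* the constant is unsigned int; converting it to int is implementation-defined, typically giving INT_MIN */
    int n = 4;           /* illustrative shift count, 0 <= n < 32 */

    /* 0xffffffff has type unsigned int, so x is converted to unsigned int and
       the whole expression is evaluated with well-defined unsigned arithmetic:
       ~(0x80000000u + 0xffffffffu) == ~0x7fffffffu == 0x80000000u */
    unsigned int first = (~(x + 0xffffffff)) >> n;

    printf("first=%x\n", first);  /* prints 8000000, i.e. 2^(31-n) */
    return 0;
}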
For the other case, ouah's answer is correct: when x = 0x80000000 is an int, ~0x80000000 = 0x7fffffff = INT_MAX, and INT_MAX + 1 is undefined behaviour as signed integer overflow.
Nevertheless, a common behaviour is wrap-around, in which case the result of the addition is the signed integer 0x80000000. Right-shifting a negative integer is implementation-defined behaviour (6.5.7). A common choice is shifting with sign extension, which would yield the result -2^(31-n); that value is then interpreted as the unsigned int with the value 2^32 - 2^(31-n) by the printf conversion specifier %x.
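A small sketch of that typical (but not guaranteed) behaviour, using INT_MIN as a stand-in for the wrapped-around sum and n = 4 as an illustrative shift count:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int n = 4;              /* illustrative shift count */
    int wrapped = INT_MIN;  /* stand-in for ~x + 1 after a wrap-around to 0x80000000 */

    /* Right-shifting a negative int is implementation-defined (6.5.7).
       With sign extension the result is -2^(31-n), which %x then shows as
       2^32 - 2^(31-n). */
    int shifted = wrapped >> n;

    printf("shifted=%x\n", (unsigned int)shifted);  /* commonly prints f8000000 for n = 4 */
    return 0;
}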
When x=0x80000000, ~(x+0xffffffff)=0x80000000, ~x+1=0x80000000,
On a system with 32-bit int (assuming x is of type int) and two's complement signed representation, this expression:
~x+1
is undefined behavior. x = 0x80000000 means ~x == 0x7FFFFFFF == INT_MAX and INT_MAX + 1 is undefined behavior. So ~x + 1 can be 0x80000000 or anything else.
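If the goal is the two's-complement negation of x shifted right, one way to avoid the overflow is to do the arithmetic on unsigned int, where wrap-around is well defined. A minimal sketch, assuming a 32-bit int and illustrative values:

#include <stdio.h>

int main(void)
{
    int x = 0x80000000;  /* implementation-defined conversion; typically INT_MIN */
    int n = 4;

    /* ~x + 1 overflows when x == INT_MIN, so do the negation on unsigned int
       instead, where wrap-around is well defined. */
    unsigned int neg = ~(unsigned int)x + 1u;  /* 0x80000000u here */

    printf("second=%x\n", neg >> n);  /* prints 8000000 for n = 4 */
    return 0;
}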
This expression:
~(x+0xffffffff)
on the other hand, is well defined and equal to 0x80000000: the constant 0xffffffff has type unsigned int in C, so the addition is performed on unsigned int, and unsigned integers never overflow in the sense of the C standard.
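A tiny example of that unsigned wrap-around, assuming a 32-bit unsigned int:

#include <stdio.h>

int main(void)
{
    /* Unsigned arithmetic is reduced modulo UINT_MAX + 1, so the addition
       simply wraps: (0x80000000 + 0xffffffff) mod 2^32 == 0x7fffffff. */
    unsigned int sum = 0x80000000u + 0xffffffffu;

    printf("sum=%x, ~sum=%x\n", sum, ~sum);  /* prints 7fffffff and 80000000 */
    return 0;
}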
This means that this statement:
printf("x=%x,n=%d,first=%x,second=%x\n",x,n,((~(x+0xffffffff))>>n),((~x+1)>>n));
invokes undefined behavior and it makes no sense to compare both results.
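One possible rewrite (an illustrative sketch, not necessarily what the asker intends) is to compute both expressions on unsigned int so neither can overflow and both are well defined:

#include <stdio.h>

int main(void)
{
    int x = 0x80000000;  /* implementation-defined conversion; typically INT_MIN */
    int n = 4;

    /* Doing both computations on unsigned int removes the signed overflow,
       so both expressions are well defined and agree. */
    unsigned int ux = (unsigned int)x;
    printf("x=%x,n=%d,first=%x,second=%x\n",
           ux, n, (~(ux + 0xffffffffu)) >> n, (~ux + 1u) >> n);
    return 0;
}

With these illustrative values, both fields print 8000000.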