int mystery(int x, int n)
{
return (x + (x>>31 & ((1 << n) + ~0))) >> n;
}
I have been trying to figure out how this code works. This is what I have so far:
It divides x by 2^n with correct rounding (rounding towards zero), so the expression is equivalent to:
y = x / (1 << n);
If you take a naïve approach to division by 2^n, i.e.
y = x >> n;
you get incorrect rounding for x < 0: a right shift rounds towards negative infinity rather than towards zero, so the result is too small by one whenever x is negative and not an exact multiple of 2^n.
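For example (assuming an arithmetic right shift, as discussed below), x = -7 and n = 1 give different results for the two forms:

#include <stdio.h>

int main(void)
{
    int x = -7, n = 1;
    printf("%d\n", x >> n);        /* -4: the shift rounds towards negative infinity */
    printf("%d\n", x / (1 << n));  /* -3: division truncates towards zero (C99 and later) */
    return 0;
}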
This part of the expression: (x>>31 & ((1 << n) + ~0))
is equal to zero for x >= 0 (x >> 31 is then 0), but for x < 0 the term x >> 31 is all ones, so the expression evaluates to (1 << n) + ~0, i.e. 2^n - 1; adding this to x before the shift adjusts the result so that it rounds towards zero.
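To see the correction in action, here is a minimal trace of the expression for x = -7, n = 1 (a sketch assuming a 32-bit int and an arithmetic right shift):

#include <assert.h>

int main(void)
{
    int x = -7, n = 1;
    int mask = (1 << n) + ~0;      /* ~0 is -1, so mask == 2^n - 1 == 1 */
    int adj  = (x >> 31) & mask;   /* x < 0, so x >> 31 is all ones and adj == mask */
    int y    = (x + adj) >> n;     /* (-7 + 1) >> 1 == -3 */
    assert(y == x / (1 << n));     /* matches round-towards-zero division */
    return 0;
}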
Note that, strictly speaking, the expression relies on implementation-defined behaviour, as it assumes that right-shifting a negative signed integer preserves the sign bit (an arithmetic shift). While this is true for most compilers and platforms, it cannot be guaranteed, so the expression is not 100% safe or portable.
Note also that the expression has a hard-coded assumption that int is 32 bits wide, which also makes it non-portable. A more portable version, which works with any size of int (it requires <limits.h> for CHAR_BIT), would be:
return (x + (x >> (sizeof(int) * CHAR_BIT - 1) & ((1 << n) + ~0))) >> n;
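As a quick sanity check, the expression can be compared against the division operator over a range of inputs; the following is just an illustrative test (the function name div_pow2 is mine, not part of the original code):

#include <assert.h>
#include <limits.h>   /* CHAR_BIT */

/* Same expression as above, without the hard-coded 31. */
static int div_pow2(int x, int n)
{
    return (x + (x >> (sizeof(int) * CHAR_BIT - 1) & ((1 << n) + ~0))) >> n;
}

int main(void)
{
    for (int x = -1000; x <= 1000; x++)
        for (int n = 0; n < 8; n++)
            assert(div_pow2(x, n) == x / (1 << n));
    return 0;
}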