My question is: why does a >> 1 shift in the sign bit, but (a & 0xaaaaaaaa) >> 1 does not?
Code snippet
int a = 0xaaaaaaaa;
std::cout << sizeof(a) << std::endl;
getBits(a);
std::cout << sizeof(a>>1) << std::endl;
getBits(a >> 1);
std::cout << sizeof(a & 0xaaaaaaaa) << std::endl;
getBits(a & 0xaaaaaaaa);
std::cout << sizeof((a & 0xaaaaaaaa)>>1) << std::endl;
getBits((a & 0xaaaaaaaa) >> 1);
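(getBits isn't shown in the question; a minimal sketch that would produce output like the result below, assuming it simply prints the 32-bit pattern of its argument:)
#include <bitset>
#include <iostream>

// Assumed definition (not shown in the question): print the 32-bit pattern
// of the argument. A negative int converts to unsigned int here, which
// keeps the two's-complement bit pattern.
void getBits(unsigned int value)
{
    std::cout << std::bitset<32>(value) << std::endl;
}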
result
4
10101010101010101010101010101010
4
11010101010101010101010101010101
4
10101010101010101010101010101010
4
01010101010101010101010101010101
a >> 1 is boring. The result is simply implementation-defined for a signed type when a is negative; most implementations perform an arithmetic shift, which copies the sign bit, and that is what you see here.
(a & 0xaaaaaaaa) >> 1 is more interesting. For the likely case of your having a 32-bit int (among others), 0xaaaaaaaa does not fit in int, so the literal has type unsigned int (an obscure rule for hexadecimal literals: they are allowed to take an unsigned type). Due to the usual arithmetic conversions, a is converted to an unsigned type too, and the type of the expression a & 0xaaaaaaaa is therefore unsigned. Right-shifting an unsigned value always shifts in zero bits, so the sign bit is not propagated.
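You can check the deduced types directly; the following is a small sketch that assumes a 32-bit int (so that 0xaaaaaaaa does not fit in int):
#include <type_traits>

int main()
{
    int a = 0xaaaaaaaa;
    (void)a;  // a is only used inside decltype below

    // The hexadecimal literal is too big for a 32-bit int, so its type is unsigned int.
    static_assert(std::is_same<decltype(0xaaaaaaaa), unsigned int>::value,
                  "hex literal is unsigned int");

    // The usual arithmetic conversions make the & expression unsigned, so >> shifts in zeros.
    static_assert(std::is_same<decltype(a & 0xaaaaaaaa), unsigned int>::value,
                  "a & 0xaaaaaaaa is unsigned int");

    // Plain a >> 1 stays signed, so shifting a negative value is implementation-defined.
    static_assert(std::is_same<decltype(a >> 1), int>::value,
                  "a >> 1 stays int");
}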
Makes a nice question for the pub quiz.
Reference: http://en.cppreference.com/w/cpp/language/integer_literal, especially the "The type of the literal" table.