In the book "The C Programming Language" by K&R, there is a bit-count function:
int bitsCount(unsigned x)
{
    int b;

    for (b = 0; x != 0; x >>= 1)
        if (x & 01)
            b++;
    return b;
}
My question is: why do they use x & 01 and not x & 1 or x & 00000001? Doesn't 01 mean octal 1?
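For reference, here is a quick test harness (my own sketch, not part of the book's text) that exercises bitsCount and checks that 01, 1, and 0x1 all denote the same mask:

#include <assert.h>
#include <stdio.h>

int bitsCount(unsigned x)
{
    int b;

    for (b = 0; x != 0; x >>= 1)
        if (x & 01)
            b++;
    return b;
}

int main(void)
{
    unsigned x = 0xF5u;   /* 1111 0101 in binary: six set bits */

    /* 01 (octal), 1 (decimal) and 0x1 (hex) are the same integer constant,
       so all three masks pick out the same low bit. */
    assert((x & 01) == (x & 1) && (x & 1) == (x & 0x1));

    printf("bitsCount(0x%X) = %d\n", x, bitsCount(x));   /* prints 6 */
    return 0;
}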
Semantically, you're correct: it doesn't matter. x & 01, x & 1, x & 0x1, etc. will all do exactly the same thing (and in every sane compiler, generate exactly the same code). What you're seeing here is an author's convention that was once pretty standard (but never universal) and is now much less so.

The use of octal in this case is to make it clear that bitwise operations are taking place; I'd wager that the author defines flag constants (intended to be bitwise-OR'd together) in octal as well. This is because it's much easier to reason about, say, 010 & 017 than about 8 & 15, since you can think about it one digit at a time.

Today I find it much more common to use hex, for exactly the same reason (bitwise operations apply one digit at a time). The advantage of hex over octal is that hex digits align nicely with bytes, and I'd expect most bitwise operations in modern code to be written with hex constants (although I tend to write trivial constants below 10 as a single decimal digit, so I'd personally use x & 1 rather than x & 0x1 in this context).
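Here is a small illustration of that digit-at-a-time reasoning. The flag names are made up for the example, not taken from any real API:

#include <stdio.h>

/* Hypothetical permission flags, one bit each, written in hex. */
#define FLAG_READ  0x1
#define FLAG_WRITE 0x2
#define FLAG_EXEC  0x4

int main(void)
{
    /* Octal, digit by digit: 010 & 017 -> (1&1)(0&7) -> 010 (decimal 8),
       which is easier to see at a glance than "8 & 15 is 8". */
    printf("010 & 017 = 0%o (decimal %d)\n", 010 & 017, 010 & 017);

    /* The same reasoning in hex with OR'd flag constants. */
    unsigned mode = FLAG_READ | FLAG_WRITE;               /* 0x3 */
    printf("mode & FLAG_WRITE = 0x%X\n", mode & FLAG_WRITE);
    return 0;
}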