
How does this method count the number of 1s in binary representation? [duplicate]

Possible Duplicate:
n & (n-1) what does this expression do?

Consider the following algorithm:

int count(int num)
{
  int ones = 0;
  while(num)
  {
    ++ones; 
    num &= num - 1;
  }

  return ones;
}

What is the significance of num & (num-1)? How does it work?

asked Aug 04 '12 by user1526667


2 Answers

num &= num - 1;

clears the least significant bit set in num.

This algorithm counts the bits set by clearing them and incrementing a counter until they're all gone.

To understand why it clears the least significant set bit, consider what decrementing does to the bits, together with what the & operation does.

Subtracting in binary works just as the process we were all taught in decimal as children. You work from right (least significant) to left, simply subtracting individual digits when possible, and "borrowing" from the next digit when necessary.

When subtracting 1 from a binary number ending in a set of zeros, this "borrowing" and subtracting turns all the zeros in lower positions than the rightmost 1 to 1's and turns the rightmost 1 to a zero (because it was borrowed).

Then applying the & operator leaves all of those lower digits zero, because they are zero in num, and clears the least significant set bit, because it is zero in num - 1.

Both of these operations leave the more significant digits unchanged.

Here's a good list of bit twiddling hacks including this one, which is due to Brian Kernighan.

answered Sep 30 '22 by Don Roby


Here's a more detailed (but not very well written!) answer.

There are two cases. If the least significant bit is set, num - 1 simply clears it. If it is not set, num - 1 turns all the trailing zeroes into ones and the least significant 1 into a 0; the remaining bits are unchanged. When you then "and" the two values, all the unchanged bits stay the same, the least significant 1 (anded with a 0) becomes 0, and the trailing bits stay zero because they are zero in num. This is illustrated here:

num             =      1000110111[1]0000
num - 1         =      1000110111[0]1111
num & (num - 1) =      1000110111[0]0000

I would point out that many CPUs have a single instruction to count the number of set bits. The operation is called "popcount", and in GCC, for instance, it can be accessed using "__builtin_popcount"; see this link for details.

answered Sep 30 '22 by Jérémie