Can someone explain why Brian Kernighan's algorithm takes O(log N) to count the set bits (1s) in an integer? A simple implementation of this algorithm is below (in Java):
int count_set_bits(int n) {
    int count = 0;
    while (n != 0) {
        n &= (n - 1);
        count++;
    }
    return count;
}
I understand how it works by clearing the rightmost set bit one by one until it becomes 0, but I just don't know how we get O(log N).
To count in binary, you add 1 by converting the rightmost 0 in a number to 1 and changing every digit after that converted 1 back to 0. If all of the digits are already 1s, add a 1 to the front of the number and reset all of the other digits to 0.
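Subtracting 1 is the mirror image of that rule, which is what the algorithm exploits. As a rough illustration (my addition, not part of the original answer), this small Java snippet prints the bit patterns involved:

public class BitPatterns {
    public static void main(String[] args) {
        int n = 0b101100; // arbitrary example value; any non-zero int works
        // Subtracting 1 turns the lowest set bit into 0
        // and all of the trailing 0s into 1s.
        System.out.println("n       = " + Integer.toBinaryString(n));           // 101100
        System.out.println("n - 1   = " + Integer.toBinaryString(n - 1));       // 101011
        // ANDing the two keeps only the bits they share,
        // i.e. it clears the lowest set bit of n in one step.
        System.out.println("n&(n-1) = " + Integer.toBinaryString(n & (n - 1))); // 101000
    }
}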
This algorithm goes through as many iterations as there are set bits. So if we have a 32-bit word with only the high bit set, it will only go through the loop once. In the worst case, every bit is set and the loop runs once per bit: an integer n has roughly log2(n) bits, hence the worst case is O(log n). Here's your code annotated at the important bits (pun intended):
int count_set_bits(int n) {
    int count = 0;        // count accumulates the total bits set
    while (n != 0) {
        n &= (n - 1);     // clear the least significant set bit
        count++;
    }
    return count;
}
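To see the iteration count match the number of set bits, here is a minimal usage sketch (my addition; the wrapper class BitCount is just a hypothetical name for running the method above):

public class BitCount {
    static int count_set_bits(int n) {
        int count = 0;
        while (n != 0) {
            n &= (n - 1);
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(count_set_bits(0));        // 0: loop never runs
        System.out.println(count_set_bits(0b1000));   // 1: single set bit, single iteration
        System.out.println(count_set_bits(0b101101)); // 4: one iteration per set bit
        System.out.println(count_set_bits(-1));       // 32: all bits set, the worst case for a 32-bit int
    }
}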