Possible Duplicate:
Compute fast log base 2 ceiling
What is the fastest way to find out how many binary digits a particular integer has when it is written in binary, in C/C++?
Ex. 47(10) = 101111(2)
So 47 has 6 digits represented in binary.
If the integer is at least 1, the required number of bits is:

floor(log2(x)) + 1
For a quick fun way of doing this without needing to call math functions, check this one out:
for (digits = 0; val > 0; val >>= 1)
digits++;
As a bonus, this should cook down to a memory load and 2 registers in use, for extra whiz-bang.
If you're looking for the "fastest" way in terms of performance, you will need to resort to platform-specific methods.
Some architectures actually have an instruction that does that.
On x86, you have the bsr instruction.
In MSVC, it is accessible as:
#include <intrin.h>  // for _BitScanReverse

inline int bitlength(unsigned long x) {
    if (x == 0)
        return 0;
    unsigned long index;
    _BitScanReverse(&index, x);  // index of the highest set bit
    return (int)(index + 1);
}
GCC has the __builtin_clz() intrinsic, which does something similar: it counts leading zero bits, so for a nonzero 32-bit x the bit length is 32 - __builtin_clz(x).
The fastest solution that's presented at my favorite collection of bit twiddling hacks is Find the log base 2 of an N-bit integer in O(lg(N)) operations with multiply and lookup. It requires 13 instructions to find the highest set bit in a number.
uint32_t v; // find the log base 2 of 32-bit v
int r; // result goes here
static const int MultiplyDeBruijnBitPosition[32] =
{
0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
};
v |= v >> 1; // first round down to one less than a power of 2
v |= v >> 2;
v |= v >> 4;
v |= v >> 8;
v |= v >> 16;
r = MultiplyDeBruijnBitPosition[(uint32_t)(v * 0x07C4ACDDU) >> 27];
The traditional way
int count = 32;
for (unsigned i = 1u << 31; i != 0; i >>= 1, count--)
    if ((number & i) != 0) return count;
return 0; // number == 0 (also avoids the signed overflow of 1 << 31)
You can get more fancy with optimization.
EDIT 2: Here's the fastest code I could think of without using the Bit Scan Reverse opcode. You could use a bigger (256-entry) LUT and remove the last if statement. In my testing this was faster than the repeated OR-shift-then-LUT method described in another post.
static const int Log2_LUT[16] = {0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4};

int Log2(unsigned number) {
    int count = 0;
    if ((number & 0xFFFF0000) != 0) {
        number >>= 16;
        count += 16;
    }
    if ((number & 0x0000FF00) != 0) {
        number >>= 8;
        count += 8;
    }
    if ((number & 0x000000F0) != 0) {
        number >>= 4;
        count += 4;
    }
    return count + Log2_LUT[number];
}
Or if you're on the x86 or x86-64 architecture, you can use the BSR (Bit Scan Reverse) opcode. You can find the C++ intrinsic for it at http://msdn.microsoft.com/en-us/library/fbxyd7zd%28v=vs.80%29.aspx
Also, your question is similar to this one: What is the fastest way to calculate the number of bits needed to store a number
EDIT: Why the log2 answers are not optimal: while mathematically correct, complex floating-point operations (sine, cosine, tangent, logarithm) are among the slowest operations on modern computers. This is compounded by having to convert the integer to a float and then floor/ceiling the result.