std::bit_width finds the minimum number of bits required to represent an integral number x, as 1 + floor(log2(x)).
Why does std::bit_width return 0 for the value 0? Shouldn't it return 1, since the number of bits required to represent 0 is 1?
Also, I think the 1
in the formula is an offset.
There is a strange bit of history to bit_width.

The function that would eventually become known as bit_width started life as log2, as part of a proposal adding integer power-of-two functions. log2 was specified to produce UB when passed 0.

Because that's how logarithms work.
But then, things changed. The function later became log2p1, and for reasons that are not specified it was given a wider contract ("wide contract" in C++ parlance means that more input is considered valid). Specifically, 0 became valid input, yielding the value 0.

Which is not how logarithms work, but whatever.
As C++20 neared standardization, a name conflict was discovered (PDF). The name log2p1 happens to correspond to the name of an IEEE-754 algorithm, but it's a radically different one. Also, functions in other languages with similar inputs and results use a name like bit_length. So it was renamed to bit_width.
And since it's not pretending to do a logarithm anymore, the behavior at 0 can be whatever we want.
Indeed, the Python function int.bit_length
has the exact same behavior. Leading zeros are not considered part of the bit length, and since a value of 0
contains all leading zeros...
Because mathematically it makes sense:
bit_width(x) = log2(round_up_to_nearest_integer_power_of_2(x + 1))
bit_width(0) = log2(round_up_to_nearest_integer_power_of_2(0 + 1))
             = log2(1)
             = 0
To elaborate on what was said in the comments:

Assume "bit width" means "least number of bits required to store the (nonnegative integer) number". Intuitively we need at least log2(n) bits, rounding up, so the formula is close to ceil(log2(n)); 255 would require ceil(log2(255)) = ceil(7.99..) = 8 bits. But this doesn't work for powers of 2, so we can add a fudge factor of 1 to n to get ceil(log2(n+1)). This happens to be mathematically equivalent to 1 + floor(log2(n)) for positive n, but log2(0) is undefined, or defined as something unuseful like negative infinity, in the floor version.

If we use the ceiling formula for 0, we get ceil(log2(0+1)) = 0. In the table below I didn't write out leading zeros, and as Nicol Bolas points out, 0 is all leading zeros.
| n | bin(n) | bit_width(n) |
|---|--------|--------------|
| 8 | 1000   | 4 |
| 7 | 111    | 3 |
| 6 | 110    | 3 |
| 5 | 101    | 3 |
| 4 | 100    | 3 |
| 3 | 11     | 2 |
| 2 | 10     | 2 |
| 1 | 1      | 1 |
| 0 | 0      | 0 |