The size of an integer type (or any type) in units of char/bytes is easily computed as sizeof(type). A common idiom is to multiply by CHAR_BIT to find the number of bits occupied by the type, but on implementations with padding bits this will not be equal to the width in value bits. Worse yet, code like:

x >> (CHAR_BIT * sizeof(type) - 1)

may actually have undefined behavior if CHAR_BIT * sizeof(type) is greater than the actual width of type.
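To make the hazard concrete, here is a sketch. The shift is fine on typical platforms where unsigned has no padding bits, but an implementation with padding bits would make it undefined:

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    /* Storage size in bits: always well-defined, but it counts
       padding bits as well as value bits. */
    size_t storage_bits = sizeof(unsigned) * CHAR_BIT;

    /* Attempt to extract the most significant bit. If unsigned had
       padding bits, its width (value bits) would be smaller than
       storage_bits, the shift count could reach or exceed the width,
       and the behavior would be undefined. On common platforms with
       no padding bits it happens to work. */
    unsigned x = 42;
    unsigned msb = x >> (storage_bits - 1);

    printf("storage bits: %zu, msb: %u\n", storage_bits, msb);
    return 0;
}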
For simplicity, let's assume our types are unsigned. Then the width of type is ceil(log2((type)-1)). Is there any way to compute this value as a constant expression?
There is a function-like macro that can determine the value bits of an integer type, but only if you already know that type's maximum value. Whether you get a compile-time constant depends on what you pass in, but since the macro is built from nothing but integer arithmetic on its argument, the result is an integer constant expression whenever the argument is one, which the standard *_MAX macros are.
Credit to Hallvard B. Furuseth for his IMAX_BITS() function-like macro, which he posted in reply to a question on comp.lang.c:
/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
IMAX_BITS(INT_MAX) computes the number of bits in an int (strictly speaking, its value bits; see the note about the sign bit below), and IMAX_BITS((unsigned_type)-1) computes the number of bits in an unsigned_type. Until someone implements 4-gigabyte integers, anyway :-)
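Here is a quick sanity check, a sketch that assumes a C11 compiler for _Static_assert. The asserted values themselves are guaranteed by the standard: unsigned char can have no padding bits, and unsigned int/long have required minimum widths.

#include <limits.h>

#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
     + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

/* unsigned char never has padding bits, so its width is exactly CHAR_BIT. */
_Static_assert(IMAX_BITS((unsigned char)-1) == CHAR_BIT, "unsigned char width");

/* The standard requires at least 16 value bits in unsigned int and
   at least 32 in unsigned long. */
_Static_assert(IMAX_BITS(UINT_MAX) >= 16, "unsigned int width");
_Static_assert(IMAX_BITS(ULONG_MAX) >= 32, "unsigned long width");

A shorter variant of the macro handles any width below 2040 bits: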
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
So, for example, IMAX_BITS(INT64_MAX) yields a compile-time constant of 63. However, int64_t is a signed type, so you must add 1 to account for the sign bit if you want the actual width of an int64_t, which is of course 64.
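Spelled out as compile-time checks (again a sketch assuming C11; the stdint.h exact-width types are optional, so this also assumes int64_t exists):

#include <stdint.h>

#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))

/* INT64_MAX is 2^63 - 1: 63 value bits, plus the sign bit, gives
   the 64-bit width of int64_t. */
_Static_assert(IMAX_BITS(INT64_MAX) == 63, "value bits in int64_t");
_Static_assert(IMAX_BITS(INT64_MAX) + 1 == 64, "width of int64_t");

/* For an unsigned type, the all-ones maximum already covers every
   bit of the width. */
_Static_assert(IMAX_BITS(UINT64_MAX) == 64, "width of uint64_t");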
In a separate comp.lang.c discussion, a user named blargg gives a breakdown of how the macro works:
Re: using pre-processor to count bits in integer types...
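Condensed, the idea goes like this (my paraphrase, using the shorter 255-based version): write m = 2^n - 1 with n = 8q + r and 0 <= r < 8. Because 256 leaves remainder 1 when divided by 255, m % 255 equals 2^r - 1, so (m)/((m)%255+1) shifts away the low r bits, leaving 2^(8q) - 1. Dividing that by 255 produces the base-256 repunit 0x0101...01 with q digits, and taking the result % 255 counts those digits, giving q; multiplying by 8 recovers 8q. Finally, 7-86/((m)%255+12) is a tiny lookup table in disguise: as r runs 0 through 7, 86/(2^r+11) takes the values 7, 6, 5, 4, 3, 2, 1, 0, so the term evaluates to r. The total is 8q + r = n. The 30-bit version plays the same game in base 2^30, with a second stage based on 31 to resolve the leftover bits.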
Note that the macro only works with 2^n-1 values (i.e., all 1s in binary), as would be expected of any MAX value. Also note that while it is easy to get a compile-time constant for the maximum value of an unsigned integer type (IMAX_BITS((unsigned type)-1)), at the time of this writing I don't know of any way to do the same for a signed integer type without invoking implementation-defined behavior. If I ever find out, I'll answer my own related SO question here:
C question: off_t (and other signed integer types) minimum and maximum values - Stack Overflow