I am somewhat curious about creating a macro to generate a bit mask for a device register, up to 64 bits, such that BIT_MASK(31) produces 0xffffffff.
However, several C examples do not work as expected, as I get 0x7fffffff instead. It is as if the compiler assumes I want signed output, not unsigned. So I tried 32, and noticed that the value wraps back around to 0. This is because the C standard states that if the shift count is greater than or equal to the width of the operand being shifted, the result is undefined. That makes sense.
But, given the following program, bits2.c:
#include <stdio.h>
#include <stdlib.h>     /* for atoi() */

#define BIT_MASK(foo) ((unsigned int)(1 << foo) - 1)

int main()
{
    unsigned int foo;
    char *s = "32";

    foo = atoi(s);
    printf("%d %.8x\n", foo, BIT_MASK(foo));
    foo = 32;
    printf("%d %.8x\n", foo, BIT_MASK(foo));
    return (0);
}
If I compile with gcc -O2 bits2.c -o bits2, and run it on a Linux/x86_64 machine, I get the following:
32 00000000
32 ffffffff
If I take the same code and compile it on a Linux/MIPS (big-endian) machine, I get this:
32 00000000
32 00000000
On the x86_64 machine, if I use gcc -O0 bits2.c -o bits2, then I get:
32 00000000
32 00000000
If I tweak BIT_MASK to ((unsigned int)(1UL << foo) - 1), then the output is 32 00000000 for both forms, regardless of gcc's optimization level.
So it appears that on x86_64, either gcc is optimizing something incorrectly, or the undefined result of left-shifting a 32-bit value by 32 bits is simply decided by the hardware of each platform.
Given all of the above, is it possible to programmatically create a C macro that creates a bit mask from either a single bit or a range of bits?
I.e.:
BIT_MASK(6) = 0x40
BIT_FIELD_MASK(8, 12) = 0x1f00
Assume BIT_MASK and BIT_FIELD_MASK operate from a 0-index (0-31). BIT_FIELD_MASK is to create a mask from a bit range, i.e., 8:12.
Bit masks are used to access specific bits in a byte of data. This is often useful as a method of iteration, for example when sending a byte of data serially out a single pin. In that case the pin has to be driven high or low for each bit of the byte being transmitted.
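To make that concrete, here is a minimal sketch of the idea; write_pin() is a hypothetical stand-in for whatever GPIO call the target platform actually provides:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for a platform-specific GPIO write. */
static void write_pin(int level)
{
    printf("%d", level);
}

/* Walk a single-bit mask across the byte, MSB first, driving the pin
   high or low according to each bit. */
static void send_byte_serially(uint8_t byte)
{
    for (uint8_t mask = 0x80; mask != 0; mask >>= 1)
        write_pin((byte & mask) ? 1 : 0);
}

int main(void)
{
    send_byte_serially(0xA5);   /* prints 10100101 */
    putchar('\n');
    return 0;
}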
Each element of the mask is either set or not set (i.e., 1 or 0), denoting whether the corresponding element is present in the subset represented by the bitmask. For example, element i is in the subset if the ith bit of the mask is set. For a set of N elements there are 2^N possible masks, each corresponding to a subset.
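As a sketch of that subset interpretation, testing whether element i is in the subset is just a matter of checking bit i of the mask:

#include <stdio.h>

int main(void)
{
    unsigned mask = 0x15;               /* represents the subset {0, 2, 4} */
    for (unsigned i = 0; i < 8; i++)
        if (mask & (1U << i))           /* element i present iff bit i is set */
            printf("element %u is in the subset\n", i);
    return 0;
}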
Here is a version of the macro which will work for arbitrary positive inputs. (Negative inputs still invoke undefined behavior...)
#include <limits.h>

/* A mask with the x least-significant bits set, possibly 0 or >= 32 */
#define BIT_MASK(x) \
    (((x) >= sizeof(unsigned) * CHAR_BIT) ? \
     (unsigned) -1 : (1U << (x)) - 1)
Of course, this is a somewhat dangerous macro as it evaluates its argument twice. This is a good opportunity to use a static inline function instead, if you use GCC or target C99 in general.
static inline unsigned bit_mask(int x)
{
    return (x >= sizeof(unsigned) * CHAR_BIT) ?
        (unsigned) -1 : (1U << x) - 1;
}
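For concreteness, a quick check of both forms might look like this (the macro and function are repeated so the snippet stands alone; the output shown assumes a 32-bit unsigned int):

#include <stdio.h>
#include <limits.h>

#define BIT_MASK(x) \
    (((x) >= sizeof(unsigned) * CHAR_BIT) ? \
     (unsigned) -1 : (1U << (x)) - 1)

static inline unsigned bit_mask(int x)
{
    return (x >= sizeof(unsigned) * CHAR_BIT) ?
        (unsigned) -1 : (1U << x) - 1;
}

int main(void)
{
    printf("%.8x\n", BIT_MASK(0));    /* 00000000 */
    printf("%.8x\n", bit_mask(31));   /* 7fffffff */
    printf("%.8x\n", bit_mask(32));   /* ffffffff: the >= 32 branch avoids the shift */
    return 0;
}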
As Mysticial noted, shifting a 32-bit integer by 32 bits or more is undefined behavior, and in practice the result depends on the implementation and the hardware. Here are three different shift behaviors you might see:

- x << 32 == x
- x << 32 == 0, but x << 64 == x
- x << y == 0 for all y >= 32

However, compilers are free to do whatever they want if you shift a 32-bit operand by 32 bits or more, and they are even free to behave inconsistently (or make demons fly out your nose).
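As an illustration only (the result is undefined, so this sketch may legitimately print anything, and may change with optimization level), forcing the shift to happen at run time tends to expose the hardware's behavior rather than the compiler's constant folding:

#include <stdio.h>

int main(void)
{
    volatile unsigned n = 32;      /* volatile keeps the shift out of constant folding */
    printf("%.8x\n", 1U << n);     /* undefined: often 00000001 on x86 (count masked to
                                      5 bits), 00000000 on some other hardware, or anything */
    return 0;
}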
Implementing BIT_FIELD_MASK:
This will set bit a through bit b (inclusive), as long as 0 <= a <= 31 and 0 <= b <= 31:

#define BIT_FIELD_MASK(a, b) (((unsigned) -1 >> (31 - (b))) & ~((1U << (a)) - 1))
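A quick sanity check against the values requested in the question (assuming a 32-bit unsigned int, which the hard-coded 31 already implies):

#include <stdio.h>

#define BIT_FIELD_MASK(a, b) (((unsigned) -1 >> (31 - (b))) & ~((1U << (a)) - 1))

int main(void)
{
    printf("%.8x\n", BIT_FIELD_MASK(8, 12));   /* 00001f00 */
    printf("%.8x\n", BIT_FIELD_MASK(6, 6));    /* 00000040: a single-bit mask */
    printf("%.8x\n", BIT_FIELD_MASK(0, 31));   /* ffffffff: the full register */
    return 0;
}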