When should I use UINT32_C(), INT32_C(),... macros in C?

I switched to fixed-length integer types in my projects, mainly because they help me think about integer sizes more clearly. Including them via #include <inttypes.h> also brings in a bunch of other macros, such as the printing macros PRIu32, PRIu64, ...

To assign a constant value to a fixed-length variable I can use macros like UINT32_C() and INT32_C(). I started using them whenever I assigned a constant value.

This leads to code similar to this:

uint64_t i;
for (i = UINT64_C(0); i < UINT64_C(10); i++) { ... }

However, I have seen several examples that do not bother with this. One is the stdbool.h include file:

#define bool    _Bool
#define false   0
#define true    1

bool has a size of 1 byte on my machine, so it does not look like an int. But 0 and 1 are int literals, which the compiler should convert automatically to the right type. If I used that approach in my example, the code would be much easier to read:

uint64_t i;
for (i = 0; i < 10; i++) { ... }

So when should I use fixed-length constant macros like UINT32_C(), and when should I leave that work to the compiler (I'm using GCC)? What if I were writing MISRA C code?

asked Nov 26 '16 by TimFinnegan


2 Answers

As a rule of thumb, you should use them when the type of the literal matters. There are two things to consider: the size and the signedness.

Regarding size:

The C standard guarantees that an int can hold values up to at least 32767, and an integer literal can never have a type smaller than int. So for values up to 32767 there should be no need for the macros. If you need larger values, the type of the literal starts to matter and it is a good idea to use the macros.
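
For example, shifting a small literal so far that the result needs more bits than an int can hold is a classic case where the macros matter. A minimal sketch (the undefined behavior described assumes a platform where int is 16 bits wide):

#include <stdint.h>

/* The literal 1 has type int. On a 16-bit-int platform, 1 << 20
   shifts past the width of that type: undefined behavior.
   UINT32_C(1) has type uint_least32_t (at least 32 bits), so the
   shift is well defined on any conforming platform. */
uint32_t bad  = 1 << 20;            /* UB if int is 16 bits */
uint32_t good = UINT32_C(1) << 20;  /* well defined everywhere */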

Regarding signedness:

Integer literals with no suffix are usually of a signed type. This is potentially dangerous, as it can cause all manner of subtle bugs during implicit type promotion. For example (my_uint8_t + 1) << 31 would cause an undefined behavior bug on a 32 bit system, while (my_uint8_t + 1u) << 31 would not.

This is why MISRA has a rule stating that all integer literals should have a u/U suffix if the intention is to use unsigned types. So in my example above you could use my_uint8_t + UINT32_C(1), but you could just as well use 1u, which is perhaps the most readable. Either should be fine for MISRA.
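
Putting that example into compilable form (a minimal sketch; my_uint8_t is just an illustrative parameter name):

#include <stdint.h>

void shift_demo(uint8_t my_uint8_t)
{
    /* my_uint8_t is promoted to (signed) int and +1 keeps it signed,
       so shifting into bit 31 is undefined behavior on a system
       with 32-bit int. */
    uint32_t ub  = (my_uint8_t + 1) << 31;             /* UB */

    /* An unsigned operand makes the whole expression unsigned, so
       the shift is well defined. These two are equivalent here. */
    uint32_t ok1 = (my_uint8_t + 1u) << 31;            /* OK */
    uint32_t ok2 = (my_uint8_t + UINT32_C(1)) << 31;   /* OK */

    (void)ub; (void)ok1; (void)ok2; /* silence unused warnings */
}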


As for why stdbool.h defines true/false to be 1/0, it is because the standard explicitly says so. Boolean conditions in C still use int type, and not bool type like in C++, for backwards compatibility reasons.

It is however considered good style to treat boolean conditions as if C had a true boolean type. MISRA-C:2012 has a whole set of rules regarding this concept, called essentially boolean type. This can give better type safety during static analysis and also prevent various bugs.
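
As an illustration of that style, a minimal sketch (the function and its names are made up for this example):

#include <stdbool.h>
#include <stddef.h>

/* Treat conditions as essentially boolean: compare pointers and
   integers explicitly instead of relying on the implicit
   "non-zero means true" conversion. */
bool buffer_is_ready(const char *buf, size_t len)
{
    return (buf != NULL) && (len > 0u);
}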

answered Oct 18 '22 by Lundin

These macros are for smallish integer literals used in contexts where the compiler won't implicitly convert them to the correct size.

I've worked on an embedded platform where int is 16 bits and long is 32 bits. If you were writing portable code for platforms with either 16-bit or 32-bit int, and wanted to pass a 32-bit "unsigned integer literal" to a variadic function, you'd need the macro (or an explicit suffix or cast):

#define BAUDRATE UINT32_C(38400)
printf("Set baudrate to %" PRIu32 "\n", BAUDRATE);

On the 16-bit platform the macro expands to 38400UL, and on the 32-bit platform to just 38400U. Either way the argument matches the PRIu32 format, which is "lu" or "u" respectively.

I think that most compilers would generate identical code for (uint32_t) X as for UINT32_C(X) when X is an integer literal, but that might not have been the case with early compilers.
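
For reference, implementations typically build these macros by pasting a suffix onto the literal rather than inserting a cast, along these lines (a sketch only; the actual definitions are implementation-specific):

/* Hypothetical <stdint.h> excerpt for a platform with 16-bit int
   and 32-bit long: paste UL so the literal is unsigned long. */
#define UINT32_C(c)  c ## UL    /* UINT32_C(38400) -> 38400UL */

/* On a platform with 32-bit int, a plain U suffix suffices:
   #define UINT32_C(c)  c ## U */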

answered Oct 18 '22 by tomlogic