When are these macros actually needed?
My system's stdint.h (gcc/glibc/Linux/x86_64) uses __-prefixed variants of these to define:
# define INT64_MIN (-__INT64_C(9223372036854775807)-1)
# define INT64_MAX (__INT64_C(9223372036854775807))
# define UINT64_MAX (__UINT64_C(18446744073709551615))
# define INT_LEAST64_MIN (-__INT64_C(9223372036854775807)-1)
# define INT_LEAST64_MAX (__INT64_C(9223372036854775807))
# define UINT_LEAST64_MAX (__UINT64_C(18446744073709551615))
# define INT_FAST64_MIN (-__INT64_C(9223372036854775807)-1)
# define INT_FAST64_MAX (__INT64_C(9223372036854775807))
# define UINT_FAST64_MAX (__UINT64_C(18446744073709551615))
# define INTMAX_MIN (-__INT64_C(9223372036854775807)-1)
# define INTMAX_MAX (__INT64_C(9223372036854775807))
# define UINTMAX_MAX (__UINT64_C(18446744073709551615))
Yet for limits.h it seems to make do with:
# define LONG_MAX 9223372036854775807L
# define ULONG_MAX 18446744073709551615UL
Why can't stdint.h forget about the _C macros and simply do:
# define INT_LEAST64_MAX 9223372036854775807 //let it grow as needed
# define UINT_LEAST64_MAX 18446744073709551615U //just the U
What are the use cases for these macros?
The only one I could think of is where I want a constant that is sufficiently wide and usable in preprocessor (#if) conditionals, but at the same time not too wide:
// C guarantees longs are at least 32 bits wide
#define ONE_GIG_BUT_MAYBE_TOO_WIDE (1L<<30)
#define ONE_GIG (INT32_C(1)<<30) // possibly narrower than the define above
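Both forms do remain usable in #if arithmetic, by the way; a minimal sketch (LARGE_ADDRESS_SPACE is a made-up config knob):
// C11 7.20.4 requires the {U,}INTn_C macros to expand to integer
// constant expressions usable in #if directives, so this is valid:
#if ONE_GIG > (1 << 28)
# define LARGE_ADDRESS_SPACE 1 // hypothetical config knob
#endif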
What is the point of the {U,}INTn_C macros in <stdint.h>?
They ensure a minimal type width and signedness for a constant.
They "expand to an integer constant expression corresponding to the type (u)int_leastN_t.
"
123 << 50 // likely int overflow (UB)
INT32_C(123) << 50 // likely int overflow (UB)
INT64_C(123) << 50 // well defined.
INT32_C(123)*2000000000 // likely int overflow (UB)
UINT32_C(123)*2000000000 // well defined - even though it may mathematically overflow
Useful when defining computed constants.
// well defined, but the wrong product when unsigned is 32-bit
#define TBYTE (1024u*1024*1024*1024)
// well defined, and specified to be 1099511627776u
#define TBYTE (UINT64_C(1024)*1024*1024*1024)
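To make the difference visible, here is a small, self-contained demo (the _NARROW/_WIDE names are mine; on the common ABIs where unsigned is 32 bits, the first product wraps to 0):
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define TBYTE_NARROW (1024u*1024*1024*1024)          // wraps modulo 2^32 when unsigned is 32-bit
#define TBYTE_WIDE   (UINT64_C(1024)*1024*1024*1024) // always 1099511627776

int main(void) {
    printf("%u\n", TBYTE_NARROW);             // prints 0 on typical systems
    printf("%" PRIuLEAST64 "\n", TBYTE_WIDE); // prints 1099511627776
    return 0;
}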
It also affects code via _Generic. The expressions below could steer code to unsigned long, unsigned, and unsigned long long, respectively.
(unsigned long) 123
UINT32_C(123)
UINT64_C(123)
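One way to watch the selection happen is a small type probe (a sketch: TYPE_NAME is my own helper, and the result comments assume a typical LP64 glibc target):
#include <stdio.h>
#include <stdint.h>

#define TYPE_NAME(x) _Generic((x),            \
    unsigned int:       "unsigned int",       \
    unsigned long:      "unsigned long",      \
    unsigned long long: "unsigned long long", \
    default:            "something else")

int main(void) {
    puts(TYPE_NAME((unsigned long)123)); // unsigned long
    puts(TYPE_NAME(UINT32_C(123)));      // unsigned int: 123U on common ABIs
    puts(TYPE_NAME(UINT64_C(123)));      // unsigned long on LP64, unsigned long long on 32-bit
    return 0;
}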
The point of e.g. __UINT64_C(x) seems to be to attach the correct kind of suffix to x.
In this way, the implementers of the C standard library header files are able to separate the numerical constants (which are the same on all platforms) from the suffixes (which depend on the integer sizes).
For example, when building a 64-bit executable, __UINT64_C(x) would evaluate to x ## UL, while when building a 32-bit executable it would evaluate to x ## ULL.
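Glibc, for instance, does roughly the following (a simplified sketch; the real code keys off __WORDSIZE from bits/wordsize.h):
// pick the suffix that matches the width of long on this target
#if __WORDSIZE == 64
# define __INT64_C(c)  c ## L
# define __UINT64_C(c) c ## UL
#else
# define __INT64_C(c)  c ## LL
# define __UINT64_C(c) c ## ULL
#endif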
Edit: as @PSkocik points out, this macro is not strictly necessary for signed integers. My guess is that it is still present because (1) the suffix is necessary for unsigned values and (2) the authors wanted to keep the code consistent for signed and unsigned constants.