Can you tell me what exactly the u after a number means? For example:
#define NAME_DEFINE 1u
The %u format specifier, on the other hand, is used within the printf() function to fetch and print the value of an unsigned decimal integer variable.
1U is unsigned. An unsigned int can hold values twice as large as a signed int of the same width, but no negative values. Depending on the environment (in an expression such as 1 << i), when using U, i can be at most 31 or 15 without causing an overflow; without U, i can be at most 30 or 14. The figures 31 and 30 apply to a 32-bit int, 15 and 14 to a 16-bit int.
It means it's an unsigned int constant. It's a way of telling the compiler to use a specific type for a constant where it wouldn't otherwise know the type.
-10u is the unary minus operator applied to the constant 10u of type unsigned int. The result is the value -10 represented in the domain of the unsigned int type. According to the rules of unsigned arithmetic, the final result will be equal to UINT_MAX + 1 - 10 and will have type unsigned int.
Integer literals like 1 in C code are by default of type int. int is the same thing as signed int. One adds u or U (they are equivalent) to the literal to make it an unsigned int, preventing various unexpected bugs and strange behavior.
One example of such a bug:
On a 16-bit machine where int is 16 bits, this expression will result in a negative value:
long x = 30000 + 30000;
Both 30000 literals are of type int, and since both operands are int, the result will be int. A 16-bit signed int can only hold values up to 32767, so the addition overflows (which for signed int is undefined behavior). In practice x will get a strange, negative value rather than the expected 60000.
The code
long x = 30000u + 30000u;
will however behave as expected.
It is a way to define unsigned integer literal constants.