char byte = 0xff;
printf("%lu\n", sizeof(byte)) // Output is '1'
printf("%x\n", byte); // Output is 'ffffffff'
If the size of byte is only one byte, then why does printf() behave as if it is four bytes?
Formally, your program exhibits undefined behavior: the %x format specification expects an argument of type unsigned int, but you are passing an int, as explained below (hat tip @R). This is harmless in practice on modern two's-complement machines, since int and unsigned int have compatible bit layouts. But technically it is still undefined behavior, and it would be a good idea to fix it, as in printf("%x\n", (unsigned)byte);.
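A minimal sketch of that fix, with two illustrative alternatives for printing only the low byte (the unsigned char cast and the C99 hh length modifier are additions here, not part of the original suggestion):

#include <stdio.h>

int main(void)
{
    char byte = 0xff;

    /* Matches %x exactly, but the value was already sign-extended: prints ffffffff */
    printf("%x\n", (unsigned)byte);

    /* Cast through unsigned char first to keep only the low 8 bits: prints ff */
    printf("%x\n", (unsigned)(unsigned char)byte);

    /* Or use the hh length modifier (C99): prints ff */
    printf("%hhx\n", byte);

    return 0;
}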
The rules for passing parameters to variadic functions state that all integral types smaller than int get promoted to int. Otherwise, how would printf know, upon seeing %x, whether to grab one byte or four bytes off the stack? From the standard:
5.2.2p7:
When there is no parameter for a given argument, the argument is passed in such a way that the receiving function can obtain the value of the argument by invoking va_arg (18.10). ... If the argument has integral or enumeration type that is subject to the integral promotions (4.5), or a floating point type that is subject to the floating point promotion (4.6), the value of the argument is converted to the promoted type before the call.
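To see why the promotion is unavoidable, here is a sketch with a hypothetical variadic helper: va_arg can only retrieve the promoted type int, so a char or short argument must arrive already widened.

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: reads 'count' integer arguments back out.
 * va_arg(ap, char) would be undefined; the callee must read int. */
static void show_promoted(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++) {
        int v = va_arg(ap, int);
        printf("%d -> %x\n", v, (unsigned)v);
    }
    va_end(ap);
}

int main(void)
{
    char  c = 0xff;          /* likely -1 where char is signed */
    short s = -1;
    show_promoted(2, c, s);  /* both are promoted to int before the call */
    return 0;
}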
This is how your char turns into an int. It's implementation-defined whether char is signed or unsigned, but apparently, on the platform you use, it's a signed type. So it gets sign-extended when promoted to int: 0xff is (char)-1, and 0xffffffff is (int)-1.
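A short sketch of the difference that signedness makes after promotion (assuming an 8-bit char and a 32-bit int):

#include <stdio.h>

int main(void)
{
    signed char   sc = -1;    /* bit pattern 0xff */
    unsigned char uc = 0xff;

    /* The signed char is sign-extended when promoted to int... */
    printf("%x\n", (unsigned)(int)sc);  /* ffffffff */

    /* ...while the unsigned char is zero-extended. */
    printf("%x\n", (unsigned)(int)uc);  /* ff */

    return 0;
}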
I think it's due to integer promotion. A good blog post on this concept: http://www.idryman.org/blog/2012/11/21/integer-promotion/