I have this code:
#include <stdio.h>

int main() {
    float d = 1.0;
    int i = 2;
    printf("%d %d", d, i);
    getchar();
    return 0;
}
And the output is:
0 1072693248
I know that there is an error in the printf and that the first %d should be replaced with %f. But why is the variable i printed wrong (1072693248 instead of 2)?
Since you specified %d instead of %f, what you're really seeing is the binary representation of d as an integer.
Also, since the datatypes don't match, the code actually has undefined behavior.
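For reference, with the first specifier changed to %f as you already noted, each conversion matches the argument printf actually reads, and both values come out as expected:

#include <stdio.h>

int main() {
    float d = 1.0;
    int i = 2;
    /* %f reads the double that the float is promoted to, %d reads the int */
    printf("%f %d", d, i);   /* prints: 1.000000 2 */
    getchar();
    return 0;
}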
EDIT:
Now to explain why you don't see the 2: a float gets promoted to double when it is passed to a variadic function like printf. Type double is (in this case) 8 bytes long. However, since your printf specifies two integers (4 bytes each in this case), the two numbers you see are the two 4-byte halves of the binary representation of 1.0 as a double. The 2 isn't printed because it lies beyond the 8 bytes that your two %d specifiers consume.
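You can see those 8 bytes with a small, well-defined experiment (a sketch only; it assumes a little-endian machine with 4-byte unsigned int, which is where your output came from): copy the double 1.0 into two 4-byte integers and print them, and you get exactly the 0 and 1072693248 from your output.

#include <stdio.h>
#include <string.h>

int main() {
    double d = 1.0;            /* the value printf actually receives after promotion */
    unsigned int halves[2];    /* assumes unsigned int is 4 bytes */

    /* Copy the 8-byte double into two 4-byte integers.
       Unlike the mismatched printf, this is well-defined behavior. */
    memcpy(halves, &d, sizeof d);

    /* On a little-endian machine this prints: 0 1072693248 */
    printf("%u %u\n", halves[0], halves[1]);
    return 0;
}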
printf doesn't just use the format codes to decide how to print its arguments. It uses them to decide how to access its arguments (it uses va_arg internally). Because of this, when you give the wrong format code for the first argument (%d instead of %f) you don't just mess up the printing of the first argument, you make it look in the wrong place for all subsequent arguments. That's why you're getting nonsense for the second argument.
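To illustrate, here is a minimal sketch of a printf-like variadic function (mini_print is made up for illustration, not the real implementation): the format string alone decides whether va_arg pulls an int or a double out of the argument list, which is why one wrong code throws off every argument that follows.

#include <stdarg.h>
#include <stdio.h>

/* Like the real printf, this decides how to fetch each argument
   purely from the format codes it is given. */
static void mini_print(const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt; fmt++) {
        if (*fmt == 'd')
            printf("%d ", va_arg(ap, int));      /* consumes an int */
        else if (*fmt == 'f')
            printf("%f ", va_arg(ap, double));   /* consumes a double */
    }
    va_end(ap);
    putchar('\n');
}

int main() {
    mini_print("fd", 1.0, 2);   /* codes match the arguments: prints 1.000000 2 */
    return 0;
}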