Does printf() depend on order of format specifiers?

#include <stdio.h>

int main(void)
{
    float x = 2;
    float y = 4;
    printf("\n%d\n%f", x / y, x / y);
    printf("\n%f\n%d", x / y, x / y);
    return 0;
}

Output:

0 
0.000000
0.500000 
0

Compiled with gcc 4.4.3. The program exited with error code 12.

asked Nov 28 '22 by blacktooth

1 Answer

As noted in other answers, this is because of the mismatch between the format string and the type of the argument.

I'll guess that you're using x86 here (based on the observed results).

The arguments are passed on the stack, and x/y, although of type float, will be passed as a double to a variadic function like printf() (due to the default argument promotions).

An int is a 32-bit value, and a double is a 64-bit value.

In both cases you are passing x/y (= 0.5) twice. The representation of this value, as a 64-bit double, is 0x3fe0000000000000. As a pair of 32-bit words, it's stored as 0x00000000 (least significant 32 bits) followed by 0x3fe00000 (most significant 32 bits). So the arguments on the stack, as seen by printf(), look like this:

0x3fe00000
0x00000000
0x3fe00000
0x00000000  <-- stack pointer

In the first of your two cases, the %d causes the first 32-bit value, 0x00000000, to be popped and printed. The %f pops the next two 32-bit values, 0x3fe00000 (least significant 32 bits of the 64-bit double), followed by 0x00000000 (most significant). The resulting 64-bit value of 0x000000003fe00000, interpreted as a double, is a very small number. (If you change the %f in the format string to %g you'll see that it's almost 0, but not quite).

In the second case, the %f correctly pops the first double, and the %d pops the 0x00000000 half of the second double, so it appears to work.

answered Dec 06 '22 by Matthew Slattery