I executed the following code:
#include <stdio.h>
int main()
{
printf("%f\n", 9/5);
}
Output: 0.000000
Why not 1?
If I write printf("%f %f %d %d\n", (float)9/5, 4, sizeof(float), sizeof(int));
then the output is 1.800000 0.000000 4 59
Why not 1.800000 4 4 4?
On my machine sizeof(float) is 4.
Thanks in advance.
This is because your printf format specifier doesn't match what you passed it:
9/5 is integer division, so the result has type int (value 1). But %f expects a floating-point argument.
So you need to either cast an operand to a floating type or make one of the operands a floating-point constant:
printf("%f\n", (float)9/5);
printf("%f\n", 9./5);
As for why you're getting 0.0: passing an int where printf expects a floating-point value is undefined behavior. In practice, printf reinterprets the bits it finds as a floating-point number, and here that bit pattern happens to decode to a tiny denormalized value very close to 0.0, which prints as 0.000000.
EDIT: There's also type promotion on varargs at play.
In variadic functions, float is promoted to double, so printf in this case actually expects a 64-bit parameter holding a double. But you only passed it a 32-bit int, so it also reads an extra 32 bits beyond your argument (which happen to be zero in this case) - even more undefined behavior.