As everybody knows, printf gives you only limited precision when you output the value of a float.
However, there is a trick to increase the accuracy of the output, as this example shows:
#include <stdio.h>

int main(void)
{
    float f = 1318926965;        /* 10 random digits */
    printf("%10.f\n", f);        /* prints only 8 correct digits */
    printf("%10d\n", *(int*)&f); /* prints all digits correctly */
    return 0;
}
and my question is, why don't people use this trick more often?
float is a 32-bit IEEE 754 single-precision floating-point number: 1 bit for the sign, 8 bits for the exponent, and 23 bits for the fraction. That gives float about 7 decimal digits of precision.
The effective precision is 24 bits, not 23: the 23 stored fraction bits are preceded by an "implicit leading bit", which is always 1 for normalized values, so there are 24 significant bits in total.
April fool?
Your "random number" 1318926965 was chosen so that it has the same underlying representation both as a decimal integer and in floating-point form: reinterpret the bits of (float)1318926965 as an int and you get 1318926965 back.
Try another value, like 10. It will print as:
10
1092616192
So to answer your question:
and my question is, why don't people use this trick more often?
Because only one day of the year is April Fools' Day... The rest of the days the trick doesn't work...