I've been reading the C Primer Plus book and got to this example:
#include <stdio.h>
int main(void)
{
    float aboat = 32000.0;
    double abet = 2.14e9;
    long double dip = 5.32e-5;
    printf("%f can be written %e\n", aboat, aboat);
    printf("%f can be written %e\n", abet, abet);
    printf("%f can be written %e\n", dip, dip);
    return 0;
}
After I ran this on my MacBook, I was quite shocked at the output:
32000.000000 can be written 3.200000e+04
2140000000.000000 can be written 2.140000e+09
2140000000.000000 can be written 2.140000e+09
So I looked around and found out that the correct format specifier to display a long double is %Lf. However, I still can't understand why I got the double abet value instead of what I got when I ran it on Cygwin, Ubuntu and iDeneb, which is roughly:

-1950228512509697486020297654959439872418023994430148306244153100897726713609013030397828640261329800797420159101801613476402327600937901161313172717568.000000 can be written 2.725000e+02

Any ideas?
You can print a long double value using the %Lf format specifier.
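For what it's worth, here is the book's example with only the last printf changed: the long double conversions get the L length modifier (float arguments are promoted to double in a variadic call, so %f and %e are already correct for aboat and abet):

#include <stdio.h>

int main(void)
{
    float aboat = 32000.0;
    double abet = 2.14e9;
    long double dip = 5.32e-5;

    /* float is promoted to double when passed to printf, so %f/%e match */
    printf("%f can be written %e\n", aboat, aboat);
    printf("%f can be written %e\n", abet, abet);
    /* long double is not promoted, so it needs the L length modifier */
    printf("%Lf can be written %Le\n", dip, dip);
    return 0;
}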
The type double provides at least as much precision as float, and the type long double provides at least as much precision as double. The set of values of the type float is a subset of the set of values of the type double; the set of values of the type double is a subset of the set of values of the type long double.
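If you want to see what that means in digits on your machine, <float.h> defines FLT_DIG, DBL_DIG and LDBL_DIG, the number of decimal digits each type is guaranteed to preserve (on a typical x86 build they come out as 6, 15 and 18):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* decimal digits each floating type is guaranteed to round-trip */
    printf("float:       %d digits\n", FLT_DIG);
    printf("double:      %d digits\n", DBL_DIG);
    printf("long double: %d digits\n", LDBL_DIG);
    return 0;
}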
Try looking at the varargs calling convention on OSX; that might explain it.

I'm guessing the compiler passes the first long double parameter on the stack (or in an FPU register), and the first double parameter in CPU registers (or on the stack). Either way, they're passed in different places. So when the third call is made, the value from the second call is still lying around and the callee picks it up. But that is just a guess.
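To make the guess concrete, here is a rough sketch of what printf has to do internally with va_arg (show is a made-up printf-like function, not anything from the library): the type handed to va_arg must match the promoted type of the argument, and reading a long double argument as a double (which is what %f with a long double amounts to) picks up whatever happens to sit where a double would have been passed, so the result depends on the platform's calling convention:

#include <stdarg.h>
#include <stdio.h>

/* Made-up printf-like helper: expects exactly one double followed by one
   long double in its variable arguments. va_arg must be given the promoted
   type of each argument; asking for double where a long double was passed
   reads the wrong size/location, and where that location is depends on the
   ABI. */
static void show(const char *label, ...)
{
    va_list ap;
    va_start(ap, label);
    double d = va_arg(ap, double);            /* a double was passed here      */
    long double ld = va_arg(ap, long double); /* a long double was passed here */
    va_end(ap);
    printf("%s: %f and %Le\n", label, d, ld);
}

int main(void)
{
    show("values", 2.14e9, 5.32e-5L);
    return 0;
}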