I have a piece of code that behaves differently under Mac OS X and Linux (Ubuntu, Fedora, ...). It concerns type casting in arithmetic expressions passed to printf. The code is compiled with gcc/g++.
The following
#include <stdio.h>

int main () {
    float days = (float) (153*86400) / 86400.0;
    printf ("%f\n", days);
    float foo = days / 30.6;
    printf ("%d\n", (int) foo);
    printf ("%d\n", (int) (days / 30.6));
    return 0;
}
generates on Linux
153.000000
5
4
and on Mac OS X
153.000000
5
5
Why?
To my surprise, the following gives the same result on both Mac OS X and Linux:
printf ("%d\n", (int) (((float)(153 * 86400) / 86400.0) / 30.6));
printf ("%d\n", (int) (153 / 30.6));
printf ("%.16f\n", (153 / 30.6));
Why? I don't have a clue. Thanks.
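In case it helps, this is a small diagnostic I would add (just a sketch; %.17g prints every significant digit a double carries) to see what the intermediate values really are:

#include <stdio.h>

int main () {
    float days = (float) (153*86400) / 86400.0;

    /* %.17g prints every significant digit a double can hold. */
    printf ("%.17g\n", 30.6);          /* 30.6 is not exactly representable in binary */
    printf ("%.17g\n", (double) days); /* exactly 153 */
    printf ("%.17g\n", days / 30.6);   /* mathematically 5, but computed from an
                                          inexact 30.6; what gets printed can depend
                                          on the compiler's intermediate precision */
    return 0;
}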
Try this:
#include <stdio.h>

int main () {
    float days = (float) (153*86400) / 86400.0;
    printf ("%f\n", days);
    float foo = days / 30.6;
    printf ("%d\n", (int) foo);
    printf ("%d\n", (int) (days / 30.6));
    printf ("%d\n", (int) (float)(days / 30.6)); /* force the quotient through float first */
    return 0;
}
Notice what happens? The double-to-float conversion is the culprit: rounding days / 30.6 to float pushes it up to exactly 5.0, so the cast to int sees 5 instead of a value just below it. Remember that a float is always promoted to double in a varargs function. I'm not sure why Mac OS X would be different, though. A better (or worse) implementation of IEEE arithmetic?
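To see the rounding in isolation, here is a minimal sketch; the constant 4.9999999999999991 is just a hand-picked double slightly below 5, standing in for whatever the intermediate result of days / 30.6 happens to be when it is not exactly 5.0:

#include <stdio.h>

int main () {
    /* A double slightly below 5, standing in for an inexact
       intermediate result of days / 30.6. */
    double d = 4.9999999999999991;

    printf ("%.17g\n", d);            /* prints a value just below 5 */
    printf ("%d\n", (int) d);         /* 4: the cast truncates toward zero */
    printf ("%d\n", (int) (float) d); /* 5: float cannot hold that many digits,
                                         so the value rounds up to exactly 5.0f */
    return 0;
}

The same thing happens with float foo = days / 30.6: storing into a float rounds the quotient to 5.0f before the cast to int ever sees it.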