#include <stdio.h>

int main(void) {
    double x = 0.12345678901234567890123456789;
    printf("%0.16f\n", x);
    return 0;
}
In the code above I'm initializing x with a literal that has more significant digits than an IEEE 754 double can represent exactly. On my PC with gcc 4.9.2 it works fine: the literal is rounded to the nearest value that fits into a double. I'm wondering what happens behind the scenes (at the compiler level) in this case. Does this behaviour depend on the platform? Is it legal?
When you assign a floating-point value (literal or variable) to an integer variable, the fractional part of the number is discarded. As long as the value fits in the range of the integer type, you can safely assign a floating-point variable to an integer variable; the compiler will just truncate (not round) the value.
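For instance, here is a minimal sketch of that truncation behaviour (the variable names are just illustrative):

#include <stdio.h>

int main(void) {
    double d = 3.99;
    int i = d;            /* fractional part discarded: i becomes 3, not 4 */
    int j = (int)-2.7;    /* truncation is toward zero: j becomes -2 */
    printf("%d %d\n", i, j);
    return 0;
}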
Floating-point literals are numbers that have a decimal point or an exponent part. They can be written as:
- real literals
- binary floating-point literals
- hexadecimal floating-point literals (C only)
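As a rough illustration of these forms (assuming a C99 compiler for the hexadecimal one):

#include <stdio.h>

int main(void) {
    double a = 3.14;      /* real literal with a decimal point */
    double b = 25e-4;     /* real literal with an exponent part */
    double c = 0x1.8p1;   /* hexadecimal floating literal: 1.5 * 2^1 == 3.0 (C99) */
    printf("%g %g %g\n", a, b, c);
    return 0;
}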
When you write double x = 0.1;, the decimal number you have written is rounded to the nearest double. So what happens when you write 0.12345678901234567890123456789 is not fundamentally different.

The behavior is essentially implementation-defined, but most compilers will use the nearest representable double in place of the constant. The C standard only requires the result to be either the double immediately above or the one immediately below the written value.
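A small sketch of how you can observe that rounding yourself; on a typical IEEE 754 platform the first line prints approximately 0.12345678901234568:

#include <stdio.h>

int main(void) {
    double x = 0.12345678901234567890123456789;
    printf("%.17g\n", x);   /* 17 significant digits are enough to round-trip any IEEE 754 double */
    printf("%.30f\n", x);   /* asking for more digits just exposes the binary value the compiler stored */
    return 0;
}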