
Initializing floating point variable with large literal

#include <stdio.h>

int main(void) {
    double x = 0.12345678901234567890123456789;
    printf("%0.16f\n", x);
    return 0;
}

In the code above I'm initializing x with a literal that has more significant digits than an IEEE 754 double can represent. On my PC with gcc 4.9.2 it works fine: the literal is rounded to the nearest value that fits into a double. I'm wondering what happens behind the scenes (at the compiler level) in this case. Does this behaviour depend on the platform? Is it legal?

Nikolai Popov asked Aug 08 '16
People also ask

What happens when a floating point literal is assigned to an integer type variable?

When you assign a floating-point value (literal or variable) to an integer variable, the fractional part of the number is discarded.

Can we assign a float variable to a long integer variable?

You can assign a floating-point variable to an integer variable; the compiler truncates the value toward zero (it does not round). Note that if the value is out of range for the integer type, the behavior is undefined.

What is a floating point literal?

Floating-point literals are numbers that have a decimal point or an exponent part. They can be written as real (decimal) literals or hexadecimal floating-point literals (standard since C99); some compilers also accept binary floating-point literals as an extension.


1 Answer

When you write double x = 0.1;, the decimal number you have written is rounded to the nearest double. So what happens when you write 0.12345678901234567890123456789 is not fundamentally different.

The behavior is essentially implementation-defined, but in practice compilers targeting IEEE 754 hardware use the nearest representable double in place of the constant. The C standard (C11 6.4.4.2) only requires the result to be either the nearest representable value, or the larger or smaller representable value immediately adjacent to it, chosen in an implementation-defined manner. So yes, it is legal.

Pascal Cuoq answered Nov 01 '22