 

Is there a case when an integer loses its precision when cast to double?

Tags:

c

casting

Suppose I have

int i=25;
double j=(double)i;

Is there a chance that j will end up as something like 24.9999999... or 25.0000000...1 instead of exactly 25? I remember reading about such issues somewhere but can't recall the details.

In other words:

Is there a case when an integer loses its precision when cast to double?

sjsam asked Apr 20 '16 05:04

1 Answer

For small numbers like 25, you are fine. For very large (in absolute value) ints on an architecture where int is 64 bits or wider, any value that is not representable in 53 bits will lose precision in the conversion.

A double-precision floating-point number has 53 bits of significand precision, of which the most significant bit is implicit (it is always 1 for normal numbers, so only 52 bits are stored).

On platforms where the floating-point representation is not IEEE-754, the answer may differ. For more details, see section 5.2.4.2.2 of the C99/C11 standard.

Mohit Jain answered Oct 11 '22 16:10