Why 0.1 + 0.2 == 0.3 in D?

assert(0.1 + 0.2 != 0.3); // should be true

is my favorite check that a language uses native floating-point arithmetic.

C++

#include <cstdio>

int main() {
    printf("%d\n", (0.1 + 0.2 != 0.3));
    return 0;
}

Output:

1 

http://ideone.com/ErBMd

Python

print(0.1 + 0.2 != 0.3) 

Output:

True 

http://ideone.com/TuKsd

Other examples

  • Java: http://ideone.com/EPO6X
  • C#: http://ideone.com/s14tV

Why is this not true for D? As I understand it, D uses native floating-point numbers. Is this a bug? Does it use some specific number representation? Something else? It's pretty confusing.

D

import std.stdio;

void main() {
    writeln(0.1 + 0.2 != 0.3);
}

Output:

false 

http://ideone.com/mX6zF


UPDATE

Thanks to LukeH. This is an effect of the floating-point constant folding described there.

Code:

import std.stdio;

void main() {
    writeln(0.1 + 0.2 != 0.3); // constant folding is done in real precision

    auto a = 0.1;
    auto b = 0.2;
    writeln(a + b != 0.3);     // standard calculation in double precision
}

Output:

false
true

http://ideone.com/z6ZLk
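
The same contrast can be reproduced explicitly by doing the sum in real and in double precision yourself. This is a sketch of my own, not from the original post (the variable names are mine, and it assumes an x86 target where real is the 80-bit x87 format):

import std.stdio;

void main() {
    // Sum carried out in 80-bit real precision, as the constant folder does:
    // the rounding errors happen to cancel, so the two sides compare equal.
    real ra = 0.1L;
    real rb = 0.2L;
    writeln(ra + rb != 0.3L);  // false on x86, where real is 80 bits wide

    // Sum carried out in 64-bit double precision: the rounded sum is
    // 0.30000000000000004..., which is not the double closest to 0.3.
    double da = 0.1;
    double db = 0.2;
    writeln(da + db != 0.3);   // true
}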

asked Jul 29 '11 by Stas


2 Answers

(Flynn's answer is the correct answer. This one addresses the problem more generally.)


You seem to be assuming, OP, that the floating-point inaccuracy in your code is deterministic and predictably wrong (in a way, your approach is the polar opposite of that of people who don't understand floating point yet).

Although (as Ben points out) floating-point inaccuracy is deterministic, from your code's point of view it will not look that way unless you are very deliberate about what happens to your values at every step. Any number of factors could lead to 0.1 + 0.2 == 0.3 succeeding: compile-time optimisation is one, tweaked values for those literals another.

Rely here neither on success nor on failure; do not rely on floating-point equality either way.
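
(Not part of the original answer: a minimal sketch of how to follow that advice in D, comparing within a tolerance instead of for exact equality. It assumes a Phobos recent enough to ship std.math.isClose; older releases offered std.math.approxEqual instead.)

import std.math : isClose;
import std.stdio;

void main() {
    double a = 0.1;
    double b = 0.2;

    // Exact equality depends on exactly how and where each value was rounded.
    writeln(a + b == 0.3);        // typically false in double precision

    // Comparing within a tolerance is robust to that rounding.
    writeln(isClose(a + b, 0.3)); // true
}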

answered Sep 23 '22 by Lightness Races in Orbit

It's probably being optimized to (0.3 != 0.3), which is obviously false. Check your optimization settings, make sure they're switched off, and try again.
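
One way to see what is actually being compared is to print the values with more digits than the default formatting uses. A small sketch of my own, not from the answer:

import std.stdio;

void main() {
    double a = 0.1;
    double b = 0.2;

    // With 17 digits after the decimal point, the runtime double sum and
    // the literal 0.3 are visibly different values.
    writefln("%.17f", a + b);  // 0.30000000000000004
    writefln("%.17f", 0.3);    // 0.29999999999999999
}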

answered Sep 23 '22 by Flynn1179