Duplicates:

- How is floating point stored? When does it matter?
- Is floating point math broken?
Why does the following occur in the Python Interpreter?
>>> 0.1+0.1+0.1-0.3
5.551115123125783e-17
>>> 0.1+0.1
0.2
>>> 0.2+0.1
0.30000000000000004
>>> 0.3-0.3
0.0
>>> 0.2+0.1
0.30000000000000004
Why doesn't 0.2 + 0.1 = 0.3?
Note that the mantissa is composed of the recurring digits 0011. This is key to why there is any error in the calculations: 0.1, 0.2 and 0.3 cannot be represented precisely in a finite number of binary bits, any more than 1/9, 1/3 or 1/7 can be represented precisely in a finite number of decimal digits.
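As a quick check of the recurring-binary claim above, here is a short sketch using Python's standard decimal module: converting a float to Decimal exposes the exact value the float actually stores, which is close to, but not equal to, the decimal literal you typed.

```python
from decimal import Decimal

# Decimal(float) performs an exact conversion, so it reveals the
# stored binary64 value rather than the rounded repr Python prints.
for x in (0.1, 0.2, 0.3):
    print(x, "->", Decimal(x))
```

Running this shows that 0.1 is stored as a value slightly above 0.1, and 0.3 as a value slightly below 0.3, which is why their arithmetic does not cancel exactly.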
With decimal fractions, this floating-point number system causes some rounding errors in JavaScript. For example, 0.1 and 0.2 cannot be represented precisely. Hence, 0.1 + 0.2 === 0.3 yields false. To really understand why 0.1 cannot be represented properly as a binary floating-point number, you must understand binary.
That's because .1 cannot be represented exactly in a binary floating point representation. If you try

>>> .1
0.1

Python will respond with .1 because it only prints up to a certain precision, but there's already a small round-off error. The same happens with .3, but when you issue

>>> .2 + .1
0.30000000000000004

then the round-off errors in .2 and .1 accumulate. Also note:

>>> .2 + .1 == .3
False
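A minimal sketch of the point above: the accumulated error is tiny but nonzero, so float comparisons are normally done with a tolerance (for example math.isclose, available since Python 3.5) rather than with ==.

```python
import math

# The round-off errors in .2 and .1 accumulate into a tiny nonzero difference.
diff = (0.2 + 0.1) - 0.3
print(diff)  # a value around 5.55e-17, not exactly zero

# Exact equality fails, but a tolerance-based comparison succeeds.
print(0.2 + 0.1 == 0.3)              # False
print(math.isclose(0.2 + 0.1, 0.3))  # True
```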
Not all floating point numbers are exactly representable on a finite machine. Neither 0.1 nor 0.2 are exactly representable in binary floating point. And nor is 0.3.
A number is exactly representable if it is of the form a/b, where a and b are integers and b is a power of 2. The data type also needs a significand large enough to store the number.
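That criterion can be checked directly in Python: float.as_integer_ratio returns the exact a/b a float stores (with b a power of 2), and comparing it against the fraction named by the decimal literal shows whether anything was lost in conversion. The helper name below is my own, not a standard function.

```python
from fractions import Fraction

def exactly_representable(text: str) -> bool:
    """True if the decimal literal survives conversion to binary float exactly."""
    # Fraction(text) is the mathematically exact value of the literal;
    # as_integer_ratio() is the exact value the float actually stores.
    return Fraction(text) == Fraction(*float(text).as_integer_ratio())

print(exactly_representable("0.5"))    # True: 1/2, denominator is a power of 2
print(exactly_representable("0.375"))  # True: 3/8
print(exactly_representable("0.1"))    # False: 1/10 has a factor of 5 in b
```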
I recommend Rob Kennedy's useful webpage as a nice tool to explore representability.