I was playing around with Python when I tried
>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
which I thought was normal because of floating-point inaccuracy. I also expected that 0.1 * 10 would be slightly greater than 1.0.
I then tried
>>> Decimal(0.1 * 10)
Decimal('1')
>>> 0.1 * 10 == 1.0
True
which is weird because they shouldn't be equal.
>>> sum = 0.0
>>> for i in range(10):
...     sum += 0.1
...
>>> Decimal(sum)
Decimal('0.99999999999999988897769753748434595763683319091796875')
which is also weird because it's supposed to be slightly greater than 1.0.
Can someone explain this to me?
I am not sure if this is relevant, but I used Python 3.5.2 and Python 2.7.12 on Windows 8.1 64-bit.
The exact value of decimal 0.1 can't be represented in 64-bit binary floating-point, so it gets rounded to the nearest representable value, which is 0.1000000000000000055511151231257827021181583404541015625.
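You can inspect that exact stored value yourself with nothing but the standard library; for example, fractions.Fraction(float) and float.hex() both expose it:

>>> from fractions import Fraction
>>> Fraction(0.1)   # the exact value the float stores, as a ratio of integers
Fraction(3602879701896397, 36028797018963968)
>>> (0.1).hex()     # the same value as raw significand and exponent
'0x1.999999999999ap-4'

Every float is some integer divided by a power of two, and 1/10 is not, which is why the rounding has to happen in the first place.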
However, while the exact value of 0.1000000000000000055511151231257827021181583404541015625 * 10 can be represented in binary, it would take more bits of precision than 64-bit binary floating-point has. The result also gets rounded to the nearest representable value, and it turns out the nearest representable value is exactly 1.0.
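One way to see the two roundings separately is to do the multiplication exactly and only convert back to a float at the end; the fractions module makes this easy:

>>> from fractions import Fraction
>>> exact = Fraction(0.1) * 10   # exact arithmetic: the multiply itself never rounds
>>> exact > 1
True
>>> float(exact) == 1.0          # converting back rounds to the nearest double
True

The exact product is 1 + 2**-54, and the nearest 64-bit float to that is 1.0 itself, since the spacing between doubles just above 1.0 is 2**-52.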
Basically, you have two rounding errors, and they happen to cancel. The repeated-addition version is different: each += rounds its partial sum to the nearest double, and those intermediate rounding errors don't cancel the same way, so the final result happens to land just below 1.
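You can confirm that the step-by-step accumulation is the culprit with math.fsum, which tracks the intermediate error exactly and rounds only once at the end:

>>> import math
>>> sum([0.1] * 10)        # rounds after every addition
0.9999999999999999
>>> math.fsum([0.1] * 10)  # rounds only once, at the very end
1.0

fsum gives 1.0 for the same reason 0.1 * 10 does: the exact sum of ten copies of the stored value is 1 + 2**-54, which rounds to exactly 1.0.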