I am studying numerical analysis and I have come across this dilemma.
Running the following script,
from decimal import Decimal
a = 0.1
N = 100000
# product calculation
P = N*a
# Print product result with no apparent error
print(' %.22f ' % P)
# Print product result with the full decimal expansion of the stored 0.1
print(Decimal(0.1) * 100000)
I realize that even though 0.1 has no exact floating-point representation, when I multiply it by 100000 (which does have an exact floating-point representation) and print the result with increased precision, I do not see any error.
print(' %.22f ' % P) # Result: 10000.0000000000000000000000
This is in contrast to the case where I use the Decimal method, where I can see the error behind the product.
print(Decimal(0.1) * 100000)
Also, how is it that I can print 55 digits of precision for a number when the IEEE 754 standard only allows 53 bits? I reproduced this with the following instruction:
print("%.55f" % 0.1) #0.1000000000000000055511151231257827021181583404541015625
Can anyone explain why this happens?
a = 0.1
Assuming your Python implementation uses IEEE-754 binary64¹, this converts 0.1 to 0.1000000000000000055511151231257827021181583404541015625, because that is the representable value that is nearest to 0.1.
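You can inspect that exact stored value from Python itself; converting the float to Decimal is exact (a small illustration using only the standard library):

from decimal import Decimal

# Decimal(float) captures the binary64 value exactly, digit for digit.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625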
P = N*a
The real-number arithmetic product of 100,000 and 0.1000000000000000055511151231257827021181583404541015625 is 10,000.00000000000055511151231257827021181583404541015625. This number is not representable in binary64. The two nearest representable values are 10,000 and 10000.000000000001818989403545856475830078125. The floating-point multiplication produces the representable value that is closer, so N*a produces 10,000.
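If you want to verify this rounding yourself, a sketch like the following works (it assumes Python 3.9+ for math.nextafter; the Fraction and Decimal conversions from float are exact):

import math
from decimal import Decimal
from fractions import Fraction

# Exact rational product of the stored 0.1 and 100000 (no rounding here).
exact = Fraction(0.1) * 100000

# The representable double just above 10000.0, shown with all its digits.
above = math.nextafter(10000.0, math.inf)
print(Decimal(above))  # 10000.000000000001818989403545856475830078125

# The exact product is closer to 10000 than to its upper neighbour,
# so the float multiplication rounds down to exactly 10000.0.
print(exact - 10000 < Fraction(above) - exact)  # True
print(100000 * 0.1 == 10000.0)                  # True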
print(' %.22f ' % P)
This prints the value stored in P, formatted with 22 digits after the decimal point, yielding “10000.0000000000000000000000”.
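Note that the format string only affects the printing; the stored value really is exactly 10000, as an equality test and its hex representation confirm:

P = 100000 * 0.1

print(P == 10000.0)   # True
print(P.hex())        # 0x1.3880000000000p+13, i.e. exactly 10000
print('%.22f' % P)    # 10000.0000000000000000000000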
print(Decimal(0.1) * 100000)
Here, 0.1 is first converted to binary floating-point, yielding 0.1000000000000000055511151231257827021181583404541015625. Then Decimal(0.1) converts that number to Decimal, which produces the same value. Then the multiplication by 100,000 is performed. By default, Python uses only 28 significant digits for Decimal arithmetic, so the result of this multiplication is rounded to 10,000.00000000000055511151231.
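The context precision is easy to check, and it is what produces the 28-digit result (standard-library only):

from decimal import Decimal, getcontext

# The default Decimal context carries 28 significant digits.
print(getcontext().prec)      # 28

# Decimal(0.1) is exact, but the product is rounded to those 28 digits:
print(Decimal(0.1) * 100000)  # 10000.00000000000055511151231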
¹ This is common, but Python does not have a formal specification, and what documentation there is for it is weak about floating-point behavior.
The short answer is that while binary/decimal conversion and finite-precision roundoff error often introduce slight, visible errors, they do not always do so. In particular, it's not uncommon for two successive errors to cancel each other out, resulting in a perfectly-correct result.
In this case, while the decimal fraction 0.1 is represented in IEEE-754 double-precision floating-point by a binary value which is actually very slightly greater than 0.1, when you multiply it by 100000 the true product exceeds 10000 by less than half the gap between adjacent representable values at that magnitude, so it rounds down to exactly 10000. See Eric Postpischil's answer for full details.
(Footnote: it helps that 10000 is exactly representable in binary, while 0.1 is not.)
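A rough numeric sketch of that cancellation, assuming Python 3.9+ for math.ulp:

import math
from fractions import Fraction

stored = Fraction(0.1)            # the exact binary64 value stored for 0.1
excess = stored * 100000 - 10000  # how far the true product sits above 10000

print(float(excess))              # about 5.551115123125783e-13
print(math.ulp(10000.0) / 2)      # about 9.094947017729282e-13, half the gap between doubles here

# The excess is under half an ulp at 10000, so the product rounds back down:
print(100000 * 0.1 == 10000.0)    # True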
But then, why did it work less well when you tried to use Decimal? Because you didn't use Decimal all the way. When you start with Decimal(0.1), what you end up with is 0.1 converted to IEEE-754 binary floating-point and then converted to Decimal, so the Decimal value carries the same initial conversion error as before. But now the multiplication by 100000 happens in Decimal, which does not perform the second rounding step that the binary multiplication did, so the cancellation never happens.
To do it in Decimal all the way, you can use
print(Decimal("0.1") * 100000)
which gives 10000.0, as expected.
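For comparison, the two constructors make the difference easy to see (plain standard-library calls):

from decimal import Decimal

print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal("0.1"))  # 0.1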