I imagine this is a classic floating point precision question, but I am trying to wrap my head around this result: running 1//0.01 in Python 3.7.5 yields 99. I imagine it is an expected result, but is there any way to decide when it is safer to use int(1/f) rather than 1//f?
If this were division with real numbers, 1//0.01 would be exactly 100. Since these are floating-point approximations, though, 0.01 is slightly larger than 1/100, meaning the quotient is slightly smaller than 100. It's this 99.something value that is then floored to 99.
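You can check both effects directly. This is a minimal sketch (plain CPython, IEEE-754 doubles assumed) that inspects the exact value stored for 0.01 and compares true division with floor division:

from decimal import Decimal

print(Decimal(0.01) > Decimal("0.01"))  # True: the stored float is slightly above 1/100
print(1 / 0.01)                         # 100.0 -- true division rounds the result back to 100
print(1 // 0.01)                        # 99.0  -- floor division floors the slightly-under-100 quotient
print(int(1 / 0.01))                    # 100   -- which is why int(1/f) behaves differently here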
The reason for this outcome is as you state, and is explained in Is floating point math broken? and many other similar Q&As.
When you know the number of decimals of the numerator and denominator, a more reliable way is to multiply those numbers first so they can be treated as integers, and then perform the integer division on them:
So in your case, 1//0.01 should first be converted to 1*100 // (0.01*100), which is 100.
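As a quick sanity check (again assuming standard IEEE-754 doubles, as in CPython), the scaled denominator happens to round to exactly 1.0 here, so the division comes out right:

print(1 // 0.01)                # 99.0
print(0.01 * 100)               # 1.0 -- the tiny error in 0.01 rounds away after scaling
print(1 * 100 // (0.01 * 100))  # 100.0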
In more extreme cases you can still get "unexpected" results. It might be necessary to add a round call to the numerator and denominator before performing the integer division:
1 * 100000000000 // round(0.00000000001 * 100000000000)
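As a rough illustration (still assuming IEEE-754 doubles), the scaled denominator in such an extreme case is not guaranteed to land exactly on 1.0, which is what the round call repairs:

scaled = 0.00000000001 * 100000000000
print(scaled == 1.0)                      # may be False: the scaled denominator can miss 1.0
print(1 * 100000000000 // round(scaled))  # 100000000000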
But if this is about working with fixed decimals (money, cents), then consider working with cents as the unit, so that all arithmetic can be done as integer arithmetic, and only convert to/from the main monetary unit (dollars) when doing I/O.
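A minimal sketch of that idea (the helper names here are just for illustration): keep every amount as an integer number of cents and only format it as dollars at the edges:

def to_cents(dollars_str):
    # parse a "dollars.cents" string into an integer number of cents
    dollars, _, cents = dollars_str.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

def to_dollars(cents):
    # format an integer number of cents back into a "dollars.cents" string
    return f"{cents // 100}.{cents % 100:02d}"

price = to_cents("19.99")   # 1999
total = price * 3           # integer arithmetic only: 5997
print(to_dollars(total))    # 59.97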
Alternatively, use a library for decimals, like decimal, which:
...provides support for fast correctly-rounded decimal floating point arithmetic.
from decimal import Decimal

cent = Decimal(1) / Decimal(100)  # Contrary to floating point, this is exactly 0.01
print(Decimal(1) // cent)         # 100