 

Why 1//0.01 == 99 in Python?

Tags:

python

I imagine this is a classic floating-point precision question, but I am trying to wrap my head around this result: running `1//0.01` in Python 3.7.5 yields `99`.

I imagine it is an expected result, but is there any way to decide when it is safer to use `int(1/f)` rather than `1//f`?
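For reference, a minimal repro of the two variants (outputs as printed by CPython 3.7.5):

```python
print(1 // 0.01)      # 99.0 -- floor division on floats
print(int(1 / 0.01))  # 100  -- true division, then conversion to int
```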

asked Feb 20 '20 by Albert James Teddy



2 Answers

If this were division with real numbers, 1//0.01 would be exactly 100. Since they are floating-point approximations, though, 0.01 is slightly larger than 1/100, meaning the quotient is slightly smaller than 100. It's this 99.something value that is then floored to 99.
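You can see both effects directly; a minimal sketch (`Decimal(0.01)` prints the exact value of the binary float):

```python
from decimal import Decimal

# The float literal 0.01 is the nearest binary double to 1/100,
# and it is slightly larger than 1/100:
print(Decimal(0.01))  # 0.01000000000000000020816681711721685...

# The exact quotient is therefore slightly below 100. True division
# rounds that back up to 100.0; floor division floors it to 99.0:
print(1 / 0.01)   # 100.0
print(1 // 0.01)  # 99.0
```

Roughly speaking, `int(1/f)` benefits from true division rounding to the nearest float, while `1//f` floors the slightly-off exact quotient, so the two disagree precisely when the true quotient sits just below an integer.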

answered Oct 02 '22 by chepner

The reason for this outcome is as you state, and it is explained in Is floating point math broken? and many other similar Q&As.

When you know the number of decimal places of the numerator and denominator, a more reliable way is to multiply those numbers first so they can be treated as integers, and then perform integer division on them:

So in your case, `1//0.01` should first be converted to `1*100//(0.01*100)`, which is 100.
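A quick check of the scaled form (a sketch; note that the scaled denominator happens to round to exactly 1.0 here):

```python
# Scale both operands by 10**2, since 0.01 has two decimal places:
print(0.01 * 100)               # 1.0 (the representation error washes out)
print(1 * 100 // (0.01 * 100))  # 100.0
```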

In more extreme cases you can still get "unexpected" results. It might be necessary to add a `round()` call to the numerator and denominator before performing the integer division:

```python
# round() repairs the scaled denominator before the integer division:
1 * 100000000000 // round(0.00000000001 * 100000000000)  # 100000000000
```

But if this is about working with fixed decimals (money, cents), then consider working with cents as the unit, so that all arithmetic can be done as integer arithmetic, and you only convert to/from the main monetary unit (dollars) when doing I/O.
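A minimal sketch of the integer-cents approach (the names here are illustrative, not part of the original answer):

```python
# Keep all amounts as integer cents; integer arithmetic is exact.
price_cents = 1999  # $19.99 stored as 1999 cents
quantity = 3
total_cents = price_cents * quantity

# Convert to dollars only at the I/O boundary:
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $59.97
```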

Alternatively, use a library for decimals, like the decimal module, which:

...provides support for fast correctly-rounded decimal floating point arithmetic.

```python
from decimal import Decimal

cent = Decimal(1) / Decimal(100)  # contrary to floating point, this is exactly 0.01
print(Decimal(1) // cent)         # 100
```
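As an aside, constructing the Decimal directly from a string gives the same exact value without the detour through division (a small variation on the answer's code):

```python
from decimal import Decimal

# Decimal("0.01") is exactly 0.01, unlike the float literal 0.01:
print(Decimal(1) // Decimal("0.01"))  # 100
```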
answered Oct 02 '22 by trincot