Can anyone explain how the modulo operator works in Python? I cannot understand why 3.5 % 0.1 = 0.1
Actually, it's not true that 3.5 % 0.1 is 0.1. You can test this very easily:

>>> print(3.5 % 0.1)
0.1
>>> print(3.5 % 0.1 == 0.1)
False
In actuality, on most systems, 3.5 % 0.1 is 0.099999999999999811. But, on some versions of Python, str(0.099999999999999811) is 0.1:

>>> 3.5 % 0.1
0.099999999999999811
>>> repr(3.5 % 0.1)
'0.099999999999999811'
>>> str(3.5 % 0.1)
'0.1'
Now, you're probably wondering why 3.5 % 0.1 is 0.099999999999999811 instead of 0.0. That's because of the usual floating-point rounding issues. If you haven't read What Every Computer Scientist Should Know About Floating-Point Arithmetic, you should, or at least the brief Wikipedia summary of this particular issue.
Note also that 3.5 / 0.1 is not 34, it's 35. So, 3.5 / 0.1 * 0.1 + 3.5 % 0.1 is 3.5999999999999996, which isn't even close to 3.5. The identity x == x/y * y + x%y is pretty much fundamental to the definition of modulus, and it breaks here, in Python and in just about every other programming language.
But Python 3 comes to the rescue there. Most people who know about // know that it's how you do "integer division" between integers, but don't realize that it's how you do modulus-compatible division between any types. 3.5 // 0.1 is 34.0, so 3.5 // 0.1 * 0.1 + 3.5 % 0.1 is (at least within a small rounding error of) 3.5. This has been backported to 2.x, so (depending on your exact version and platform) you may be able to rely on it. And, if not, you can use divmod(3.5, 0.1), which returns (within rounding error) (34.0, 0.099999999999999811), and has done so all the way back into the mists of time. Of course you probably still expected this to be (35.0, 0.0), not (34.0, almost-0.1), but you can't have that because of rounding errors.
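A minimal sketch of the floor-division version of the identity holding, within rounding error (again assuming typical IEEE-754 doubles):

```python
x, y = 3.5, 0.1

q = x // y   # floor division: 34.0, consistent with the modulus
r = x % y    # roughly 0.09999999999999981

# q * y + r recovers x to within a tiny rounding error:
print(abs(q * y + r - x))  # at most a few 1e-16

# divmod computes both at once, and works in old and new Pythons alike:
print(divmod(x, y))
```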
If you're looking for a quick fix, consider using the Decimal type:

>>> from decimal import Decimal
>>> Decimal('3.5') % Decimal('0.1')
Decimal('0.0')
>>> print(Decimal('3.5') % Decimal('0.1'))
0.0
>>> (Decimal(7)/2) % (Decimal(1)/10)
Decimal('0.0')
This isn't a magical panacea: for example, you'll still have to deal with rounding error whenever the exact value of an operation isn't finitely representable in base 10. But the rounding errors line up better with the cases human intuition expects to be problematic. (There are also advantages to Decimal over float in that you can specify explicit precisions, track significant digits, etc., and in that it's actually the same in all Python versions from 2.4 to 3.3, while details about float have changed twice in the same time. It's just not perfect, because that would be impossible.) But when you know in advance that your numbers are all exactly representable in base 10, and they don't need more digits than the precision you've configured, it will work.