In Python 2.6.5, the following expression yields False:
>>> import decimal
>>> decimal.Decimal('0') < 1.0
False
Is there a rationale for why comparing a Decimal against a float behaves like this?
From the documentation of the decimal module:
Changed in version 2.7: A comparison between a float instance x and a Decimal instance y now returns a result based on the values of x and y. In earlier versions x < y returned the same (arbitrary) result for any Decimal instance x and any float instance y.
So it looks like this was a bug/missing feature, and all you need to do is upgrade to Python 2.7 or later.
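If upgrading isn't an option, the usual workaround is to put both operands into the same type before comparing. A minimal sketch (converting through str() here is one common choice; note that Decimal.from_float only exists from 2.7 onwards):
>>> import decimal
>>> # Python 2.7 and later: the comparison uses the numeric values
>>> decimal.Decimal('0') < 1.0
True
>>> # Workaround on 2.6: make both operands the same type first
>>> decimal.Decimal('0') < decimal.Decimal(str(1.0))
True
>>> float(decimal.Decimal('0')) < 1.0
True
Either direction works; going float -> Decimal via a string avoids the binary representation surprises you'd get from constructing a Decimal out of the float's exact binary value.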