If you put the following into App Engine Shell you get '50.49'. This result is consistent on both the 2.5 and 2.7 runtimes.
>>> a = '%0.2f' % (round(float(u'50.485'), 2),)
>>> a
'50.49'
However, if I put the same thing into my local MacBook Pro running Python 2.7.1, I get '50.48'.
>>> a = '%0.2f' % (round(float(u'50.485'), 2),)
>>> a
'50.48'
Why is this different and how can I get consistency between my local machine and App Engine's servers?
Apparently, Google App Engine uses the C "float" type (IEEE 754 single precision), while your local Python uses "double" (IEEE 754 double precision).
I suspected that CPython might have a configure switch to build with 'float' instead of 'double', but I haven't found one.
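If that explanation is right, you can reproduce the App Engine result locally by forcing the value through single precision first. This is only an illustrative sketch using the standard library; the to_single round trip is my own helper, not anything App Engine actually does:
>>> import struct
>>> def to_single(x):
...     # Round-trip through IEEE 754 single precision (C "float")
...     return struct.unpack('f', struct.pack('f', x))[0]
...
>>> '%0.2f' % (round(to_single(float(u'50.485')), 2),)
'50.49'
>>> '%0.2f' % (round(float(u'50.485'), 2),)
'50.48'
At single precision the nearest representable value is slightly above 50.485, so it rounds up; at double precision it is slightly below, so it rounds down.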
Funny, this exact topic is covered in the official Python tutorial.
Floating Point Arithmetic: Issues and Limitations
http://docs.python.org/2/tutorial/floatingpoint.html
Other surprises follow from this one. For example, if you try to round the value 2.675 to two decimal places, you get this:
>>> round(2.675, 2)
2.67
The documentation for the built-in round() function says that it rounds to the nearest value, rounding ties away from zero. Since the decimal fraction 2.675 is exactly halfway between 2.67 and 2.68, you might expect the result here to be (a binary approximation to) 2.68. It’s not, because when the decimal string 2.675 is converted to a binary floating-point number, it’s again replaced with a binary approximation, whose exact value is
2.67499999999999982236431605997495353221893310546875
Since this approximation is slightly closer to 2.67 than to 2.68, it’s rounded down.
If you’re in a situation where you care which way your decimal halfway-cases are rounded, you should consider using the decimal module. Incidentally, the decimal module also provides a nice way to “see” the exact value that’s stored in any particular Python float:
>>> from decimal import Decimal
>>> Decimal(2.675)
Decimal('2.67499999999999982236431605997495353221893310546875')
One of the articles linked from the official Python tutorial (http://www.lahey.com/float.htm), despite using FORTRAN, correctly describes what IEEE 754 really means:
Different computers use different numbers of bits to store floating-point numbers. Even when the same IEEE formats are used for storing numbers, differences in calculations can occur because of the size of intermediate registers. To increase portability and to ensure consistent results, I recommend against comparing for exact equality of real numbers in FORTRAN.
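The same advice carries over directly to Python: compare floats with a tolerance rather than for exact equality (the 1e-9 tolerance below is just an arbitrary choice for illustration):
>>> 0.1 + 0.2 == 0.3
False
>>> abs((0.1 + 0.2) - 0.3) < 1e-9
True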
So if you really care about this then definitely use the built-in decimal module (or the cdecimal module, available here http://pypi.python.org/pypi/cdecimal/2.3, if performance is an issue).
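For the original example, parsing the string straight into a Decimal and rounding with an explicit rule gives the same answer on every platform. ROUND_HALF_UP is chosen here only to match the "ties away from zero" behaviour the asker expected; pick whichever rounding mode you actually want:
>>> from decimal import Decimal, ROUND_HALF_UP
>>> a = Decimal(u'50.485').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
>>> str(a)
'50.49'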