I've encountered an annoying problem in outputting a floating point number. When I format 11.545 with a precision of 2 decimal points on Windows it outputs "11.55", as I would expect. However, when I do the same on Linux the output is "11.54"!
I originally encountered the problem in Python, but further investigation showed that the difference is in the underlying C runtime library. (The architecture is x86-64 in both cases.) Running the following line of C produces the different results on Windows and Linux, same as it does in Python.
printf("%.2f", 11.545);
To shed more light on this I printed the number to 20 decimal places ("%.20f"):
Windows: 11.54500000000000000000
Linux: 11.54499999999999992895
I know that 11.545 cannot be stored precisely as a binary number. So what appears to be happening is that Linux outputs the number as it is actually stored, with the best possible precision, while Windows outputs the simplest decimal representation of it, i.e. it tries to guess what the user most likely meant.
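One quick way to see exactly what got stored (a sketch, assuming a Python recent enough, 2.7+/3.x, for Decimal to accept a float) is to convert the float to a Decimal, which is an exact conversion:

from decimal import Decimal

# Decimal(float) converts the double exactly, with no further rounding, so this
# prints every digit the number is really stored with; it starts
# 11.5449999999999999..., consistent with the Linux %.20f output above.
print(Decimal(11.545))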
My question is: is there any (reasonable) way to emulate the Linux behaviour on Windows?
(While the Windows behaviour is certainly the intuitive one, in my case I actually need to compare the output of a Windows program with that of a Linux program, and the Windows one is the only one I can change. By the way, I tried to look at the Windows source of printf, but the actual function that does the float->string conversion is _cfltcvt_l, and its source doesn't appear to be available.)
EDIT: the plot thickens! The theory about this being caused by an imprecise representation might be wrong, because 0.125 does have an exact binary representation and it's still different when output with '%.2f' % 0.125:
Windows: 0.13
Linux: 0.12
However, round(0.125, 2) returns 0.13 on both Windows and Linux.
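(The exactness of 0.125 is easy to confirm from Python itself; float.hex shows the stored value directly, and on 2.7+/3.x Decimal gives its exact decimal equivalent:)

from decimal import Decimal

print((0.125).hex())    # 0x1.0000000000000p-3 -- exactly 2**-3, no representation error
print(Decimal(0.125))   # 0.125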
I don't think Windows is doing anything especially clever (like trying to reinterpret the float in base 10) here: I'd guess that it's simply computing the first 17 significant digits accurately (which would give '11.545000000000000') and then tacking extra zeros on the end to make up the requested number of places after the point.
As others have said, the different results for 0.125 come from Windows using round-half-up and Linux using round-half-to-even.
Note that for Python 3.1 (and Python 2.7, when it appears), the result of formatting a float will be platform independent (except possibly on unusual platforms).
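To illustrate that point: on a sufficiently recent CPython (3.1+, or 2.7+), the conversion is done by Python's own correctly-rounded code rather than by the platform's C runtime, so the following should give the same results on Windows and Linux (a quick check, not a guarantee for every platform):

# CPython 3.1+/2.7+ does its own correctly rounded float formatting,
# so these results should not depend on the C runtime underneath.
print('%.2f' % 11.545)   # 11.54  (the stored value is just below 11.545)
print('%.2f' % 0.125)    # 0.12   (an exact tie, rounded to even)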
The decimal module gives you access to several rounding modes:
import decimal

fs = ['11.544', '11.545', '11.546']

def convert(f, nd):
    # we want 'nd' beyond the dec point
    nd = f.find('.') + nd
    c1 = decimal.getcontext().copy()
    c1.rounding = decimal.ROUND_HALF_UP
    c1.prec = nd
    d1 = c1.create_decimal(f)
    c2 = decimal.getcontext().copy()
    c2.rounding = decimal.ROUND_HALF_DOWN
    c2.prec = nd
    d2 = c2.create_decimal(f)
    print d1, d2

for f in fs:
    convert(f, 2)
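If I'm reading the decimal docs right, that prints the half-up and half-down results side by side:

11.54 11.54
11.55 11.54
11.55 11.55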
You can construct a decimal from an int or a string. In your case, feed it a string with more digits than you want and trim it to length by setting context.prec; the rounding mode determines what happens to the surplus digits.
Here is a link to a pymotw post with a detailed overview of the decimal module:
http://broadcast.oreilly.com/2009/08/pymotw-decimal---fixed-and-flo.html
First of all, it sounds like Windows has it wrong in this case (not that this really matters). The C standard requires that the value output by %.2f is rounded to the appropriate number of digits. The best known algorithm for this is dtoa, implemented by David M. Gay. You can probably port this to Windows or find a native implementation.
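If the Windows side is Python, a lighter-weight workaround (a minimal sketch, assuming Python 2.7+ where Decimal accepts a float, and assuming the round-half-to-even behaviour described above is what you need to match; the helper name is just illustrative) is to do the rounding through the decimal module on the float's exact binary value:

from decimal import Decimal, ROUND_HALF_EVEN

def format_2dp(x):
    # Decimal(x) captures the double's exact binary value with no extra rounding;
    # quantize then rounds that exact value half-to-even, matching the Linux output.
    return str(Decimal(x).quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN))

print(format_2dp(11.545))   # 11.54
print(format_2dp(0.125))    # 0.12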
If you haven't already read "How to Print Floating-Point Numbers Accurately" by Steele and White, find a copy and read it. It is definitely an enlightening read. Make sure to find the original from the late 70's. I think that I purchased mine from ACM or IEEE at some point.