I've been reading the book Write Great Code: Understanding the Machine. In the section about rounding, it says:
Numbers should be rounded up to the smallest larger number if the decimal bit value is greater than or equal to half the total decimal value that can be represented.
which means:
round(1.5)   # equals 2
round(1.49)  # equals 1
but when I tried this with Python:
x1 = 1.4999 # rounds to 1
x2 = 1.4999999999999999 # rounds to 2
print(round(x1))
print(round(x2))
the output was:
1
2
I tried the same thing with C# and Swift, and they gave the same output, so I assume this behavior is language-agnostic.
But why does this happen?
My assumption is that the floating-point unit rounds away the extra bits, which converts "1.4999999999999999" to "1.5" before the programmer's rounding is applied.
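One quick check I thought of (assuming CPython, which uses IEEE-754 64-bit binary floats) is to print the exact value each literal actually gets stored as, using the decimal module:

from decimal import Decimal

x1 = 1.4999
x2 = 1.4999999999999999

# Decimal(float) shows the exact stored binary64 value, not the literal I typed
print(Decimal(x1))   # a long decimal very close to 1.4999, clearly below 1.5
print(Decimal(x2))   # exactly 1.5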
In x2 = 1.4999999999999999 and print(round(x2)), there are two operations that affect the value. The round function cannot operate directly on the number 1.4999999999999999 or on the numeral "1.4999999999999999". Its operand must be in the floating-point format that the Python implementation uses.
So, first, 1.4999999999999999 is converted to the floating-point format. Python is not strict about which floating-point format a Python implementation uses, but the IEEE-754 basic 64-bit binary format is common. In this format, the closest representable values to 1.4999999999999999 are 1.5 and 1.4999999999999997779553950749686919152736663818359375. The former is closer to 1.4999999999999999 than the latter is, so the former is used.
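A minimal sketch of how to confirm this from Python itself (assuming an IEEE-754 binary64 implementation such as CPython's, and Python 3.9+ for math.nextafter):

from decimal import Decimal
import math

x = 1.4999999999999999      # the literal is converted to binary64 right here

# Exact value actually stored: the conversion already rounded up to 1.5
print(Decimal(x))                      # 1.5

# The next representable value below it, the other candidate mentioned above
print(Decimal(math.nextafter(x, 0)))   # 1.4999999999999997779553950749686919152736663818359375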
Thus, converting 1.4999999999999999 to the floating-point format produces 1.5. Then round(1.5) produces 2.
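To make the two steps visible separately (again assuming binary64 floats), you can compare the converted literal against 1.5 before rounding. If the digits must be kept exactly as written, decimal.Decimal accepts a string, which avoids the binary conversion entirely:

from decimal import Decimal

x2 = 1.4999999999999999
print(x2 == 1.5)     # True: the conversion to float already produced 1.5
print(round(x2))     # 2: round() only ever sees 1.5

# A string literal keeps all the digits, so the rounding matches the written value
print(round(Decimal("1.4999999999999999")))   # 1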