
Why does rounding the floating-point number 1.4999999999999999 produce 2?

I've been reading the book Write Great Code: Understanding the Machine. In the section about rounding, it says:

Numbers should be rounded up to the next larger number if the fractional value is greater than or equal to half the total fractional value that can be represented.

which means:

round(1.5) // equals 2
round(1.49) // equals 1

but when I tried this with Python:

x1 = 1.4999              # rounds to 1
x2 = 1.4999999999999999  # rounds to 2

print(round(x1))
print(round(x2))

the output was:

1
2

I tried the same thing in C# and Swift and got the same output, so I assume this behavior is language-agnostic.

But why does this happen?

My assumption is that the floating-point unit rounds off the extra bits, converting 1.4999999999999999 to 1.5 before the programmer's rounding is applied.
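One quick check that supports this: if the conversion happens when the literal is parsed, the stored value should already compare equal to 1.5 before round is ever called.

x2 = 1.4999999999999999
print(x2 == 1.5)  # True: the literal is stored as exactly 1.5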

asked Jan 27 '23 by Ramy M. Mousa


1 Answer

In x2 = 1.4999999999999999 and print(round(x2)), there are two operations that affect the value. The round function cannot operate directly on the number 1.4999999999999999 or the numeral “1.4999999999999999”. Its operand must be in the floating-point format that the Python implementation uses.

So, first, 1.4999999999999999 is converted to the floating-point format. Python is not strict about which floating-point format a Python implementation uses, but the IEEE-754 basic 64-bit binary format is common. In this format, the closest representable values to 1.4999999999999999 are 1.5 and 1.4999999999999997779553950749686919152736663818359375. The former is closer to 1.4999999999999999 than the latter is, so the former is used.
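One way to see this concretely is Python's decimal module, whose Decimal constructor converts a float to its exact stored value (math.nextafter, used here to get the neighboring double, requires Python 3.9 or later):

import math
from decimal import Decimal

below = math.nextafter(1.5, 0.0)    # nearest representable double below 1.5
print(Decimal(1.4999999999999999))  # 1.5
print(Decimal(below))               # 1.4999999999999997779553950749686919152736663818359375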

Thus, converting 1.4999999999999999 to the floating-point format produces 1.5. Then round(1.5) produces 2.
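A sketch of both steps together (noting, as a side point, that Python 3's round uses round-half-to-even, which sends 1.5 to 2 here because 2 is the even neighbor):

x2 = 1.4999999999999999
print(x2)         # 1.5: the rounding already happened at conversion time
print(round(x2))  # 2: round(1.5) then ties to the even neighbor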

answered Jan 28 '23 by Eric Postpischil