Another floating point question

I have read most of the posts on here regarding floating point, and I understand the basic underlying issue: with IEEE 754 (and just by the nature of storing numbers in binary), certain fractions cannot be represented exactly. I am trying to figure out the following: if both Python and JavaScript use the IEEE 754 standard, why is it that executing the following in Python

.1 + .1

results in 0.20000000000000001 (which is to be expected),

whereas in JavaScript (in at least Chrome and Firefox) the answer is .2?

However, performing

.1 + .2

in both languages results in 0.30000000000000004.

In addition, executing var a = 0.3; in JavaScript and printing a results in 0.3,

whereas doing a = 0.3 in Python results in 0.29999999999999999.

I would like to understand the reason for this difference in behavior.
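
For reference, the exact double behind each of these values can be inspected with Python's decimal module (a quick sketch in Python 3, where Decimal(float) converts the stored bits digit for digit):

from decimal import Decimal

print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.1 + 0.1))  # 0.200000000000000011102230246251565404236316680908203125
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875

The sums land on doubles slightly above .2 and .3, while the literal 0.3 rounds to a double slightly below it -- both languages compute the same bits and differ only in how they print them.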

In addition, many of the posts on SO link to a JavaScript port of Java's BigDecimal, but the link is dead. Does anyone have a copy?

asked Jun 15 '10 by jeffmax


2 Answers

doing a = 0.3 in Python results in 0.29999999999999999

Not quite -- watch:

>>> a = 0.3
>>> print a
0.3
>>> a
0.29999999999999999

As you see, printing a does show 0.3 -- because by default print uses str(), which (in Python 2) rounds to 12 significant digits, while typing an expression (here a is a single-variable expression) at the prompt shows its repr(), with 17 significant digits (thus revealing floating point's intrinsic limitations).
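
To make the two formatting paths concrete (a minimal sketch; in Python 2, str() formatted floats with %.12g and repr() with %.17g):

a = 0.3
print('%.12g' % a)   # 0.3                 -- what `print a` effectively showed
print('%.17g' % a)   # 0.29999999999999999 -- what the bare prompt showed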

JavaScript may have slightly different rounding rules for displaying numbers, and the exact details of that rounding are enough to explain the differences you observe. Note, for example (in a Chrome JavaScript console):

> (1 + .1) * 1000000000
  1100000000
> (1 + .1) * 100000000000000
  110000000000000.02

See? If you manage to see more digits, the anomalies (which are inevitably there) become visible too.

answered Sep 27 '22 by Alex Martelli


The difference is in the printing.

They might both have the same underlying IEEE 754 representation, but that doesn't mean they're forced to print the same way. It looks like JavaScript rounds the output when the difference is small enough.

With floating point numbers, the important part is how the binary data is structured, not what appears on the screen.
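
One way to verify that at the bit level (a sketch using Python's float.hex(), which prints the underlying double exactly):

print((0.1 + 0.1).hex())  # 0x1.999999999999ap-3 -- identical to 0.2's bits
print((0.2).hex())        # 0x1.999999999999ap-3
print((0.1 + 0.2).hex())  # 0x1.3333333333334p-2 -- one bit above 0.3's bits
print((0.3).hex())        # 0x1.3333333333333p-2

Since .1 + .1 yields exactly the same double as the literal 0.2, any difference in what gets printed (0.2 vs 0.20000000000000001) is purely a formatting choice; .1 + .2, on the other hand, lands one bit away from 0.3's double, so even shortest-round-trip printing has to show the extra digits.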

answered Sep 27 '22 by Stephen