
Avoiding problems with JavaScript's weird decimal calculations

I just read on MDN that one of the quirks of JS's handling of numbers, due to everything being "double-precision 64-bit format IEEE 754 values", is that when you do something like .2 + .1 you get 0.30000000000000004 (that's what the article says, though I get 0.29999999999999993 in Firefox). Therefore:

(.2 + .1) * 10 == 3 

evaluates to false.

This seems like it would be very problematic. So what can be done to avoid bugs due to the imprecise decimal calculations in JS?

I've noticed that if you do 1.2 + 1.1 you get the right answer. So should you just avoid any kind of math that involves values less than 1? Because that seems very impractical. Are there any other dangers to doing math in JS?

Edit:
I understand that many decimal fractions can't be stored exactly in binary, but the way most other languages I've encountered appear to deal with the error (similar to how JS handles numbers greater than 1) seems more intuitive, so I'm not used to this, which is why I want to see how other programmers handle these calculations.

Lèse majesté asked Feb 18 '11



2 Answers

In situations like these you would typically make use of an epsilon comparison.

Something like:

if (Math.abs(((0.2 + 0.1) * 10) - 3) > epsilon) 

where epsilon is something like 0.00000001, or whatever precision you require.
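Wrapped up as a small reusable helper (a sketch; the name nearlyEqual and the default tolerance are illustrative, not from the answer):

function nearlyEqual(a, b, epsilon) {
  epsilon = epsilon || 1e-8; // default tolerance; tune to your precision needs
  return Math.abs(a - b) < epsilon;
}

nearlyEqual((0.2 + 0.1) * 10, 3); // true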

Have a quick read of Comparing floating point numbers

Adriaan Stander answered Sep 22 '22


1.2 + 1.1 may be OK, but 0.2 + 0.1 may not be.

This is a problem in virtually every language in use today. The root cause is that 1/10 cannot be represented exactly as a binary fraction, just as 1/3 cannot be represented exactly as a decimal fraction.
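You can see the stored binary approximation directly by asking for more digits than the default formatting shows:

(0.1).toFixed(20) // "0.10000000000000000555"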

The workarounds involve rounding to only the number of decimal places you need. You can either work with the resulting strings, which compare exactly:

(0.2 + 0.1).toFixed(4) === 0.3.toFixed(4) // true 

or convert them back to numbers afterwards:

+(0.2 + 0.1).toFixed(4) === 0.3 // true 

or use Math.round:

Math.round(0.2 * X + 0.1 * X) / X === 0.3 // true 

where X is some power of 10, e.g. 100 or 10000, depending on the precision you need.
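The same idea wrapped in a helper (the name roundTo is illustrative):

function roundTo(value, places) {
  var X = Math.pow(10, places); // the power of 10 mentioned above
  return Math.round(value * X) / X;
}

roundTo(0.2 + 0.1, 4) === 0.3; // true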

Or you can use cents instead of dollars when counting money:

cents = 1499; // $14.99 

That way you only work with integers and you don't have to worry about decimal and binary fractions at all.
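A tiny sketch of that idea (the variable names and the 8% tax rate are made up for illustration):

var priceCents = 1499;                              // $14.99 stored as an integer
var taxCents = Math.round(priceCents * 0.08);       // 120 - round once, at the boundary
var totalCents = priceCents + taxCents;             // 1619, exact integer arithmetic
var display = '$' + (totalCents / 100).toFixed(2);  // "$16.19"

Integer arithmetic on cents is exact up to Number.MAX_SAFE_INTEGER, so the only rounding happens where you explicitly ask for it.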

2017 Update

The situation of representing numbers in JavaScript may be a little more complicated than it used to be. We used to have only one numeric type in JavaScript:

  • 64-bit floating point (the IEEE 754 double precision floating-point number - see: ECMA-262 Edition 5.1, Section 8.5 and ECMA-262 Edition 6.0, Section 6.1.6)

This is no longer the case: not only are there more numeric types in JavaScript today, but more are on the way, including a proposal to add arbitrary-precision integers to ECMAScript, and hopefully arbitrary-precision decimals will follow. See this answer for details:

  • Difference between floats and ints in Javascript?
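For what it's worth, the arbitrary-precision integer proposal mentioned above has since shipped as BigInt (ES2020). A quick sketch of the difference:

9007199254740992 + 1 === 9007199254740992    // true - doubles lose integer precision past 2^53
9007199254740992n + 1n === 9007199254740992n // false - BigInt stays exact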

See also

Another relevant answer with some examples of how to handle the calculations:

  • Node giving strange output on sum of particular float digits
rsp answered Sep 24 '22