Why does JavaScript think 354224848179262000000 and 354224848179261915075 are equal? [duplicate]

Tags:

javascript

So, I started out by trying to find the 100th Fibonacci number using a recursive function, memoizing it with the following code.

Function.prototype.memoize = function () {
    var originalFunction = this,
        slice = Array.prototype.slice, // comma, not semicolon, so cache stays local
        cache = {};
    return function () {
        // The arguments array is coerced to a string to serve as the cache key.
        var key = slice.call(arguments);
        if (key in cache) {
            return cache[key];
        } else {
            return cache[key] = originalFunction.apply(this, key);
        }
    };
};

var fibonacci = function (n) {
    return n === 0 || n === 1 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}.memoize();

console.log(fibonacci(100));

Now, as you can see in this fiddle, JavaScript logs 354224848179262000000 as the result. However, according to WolframAlpha, the hundredth Fibonacci number is actually 354224848179261915075.

Now, my question is this: why does the computation give the wrong number, even though the algorithm is completely sane? My thoughts point to JavaScript itself, because according to Google's calculator¹, the two numbers are equal.
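
Indeed, comparing the two literals directly in any JavaScript console shows that they parse to the same value:

console.log(354224848179262000000 === 354224848179261915075); // logs true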

What is it about JavaScript that causes such an error? The number is well below the maximum value of an IEEE 754 double, which is 1.7976931348623157e+308.

¹ In case this could be a bug on my platform, I have tested this on both Chromium and Firefox on Ubuntu.

— asked by Siddharth, Aug 21 '14

2 Answers

As your numbers get bigger, you lose precision. The maximum safe integer in JavaScript is actually Number.MAX_SAFE_INTEGER === 9007199254740991.
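
A quick sketch of where integer precision runs out:

// Beyond Number.MAX_SAFE_INTEGER (2^53 - 1), consecutive integers
// can no longer be told apart.
console.log(Number.MAX_SAFE_INTEGER);                     // 9007199254740991
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1);     // true: 2^53 + 1 is not representable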

For every extra bit of magnitude that is required, you lose one bit of precision, because the trailing bits are assumed to be zero.

According to IEEE 754, 354224848179262000000 is represented in binary as:

0 10001000011 0011001100111101101101110110101001111100010110010110

The exponent, 10001000011, is 1091, which gives 68 after subtracting the bias of 1023.
This means you need 68 bits to represent the significand; since there are only 52 bits available for the significand, the last 16 bits are assumed to be zero. Any calculation that falls within those 16 bits will have no effect.
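
You can inspect the stored bit pattern yourself; here is a small sketch that reads the raw bits back through a typed-array view (requires BigInt support, ES2020+):

// Write the double into an 8-byte buffer, then read its 64 raw bits back.
const buf = new ArrayBuffer(8);
new Float64Array(buf)[0] = 354224848179261915075;
const bits = new DataView(buf).getBigUint64(0).toString(2).padStart(64, "0");
console.log(bits.slice(0, 1));  // sign bit
console.log(bits.slice(1, 12)); // exponent: 10001000011
console.log(bits.slice(12));    // 52-bit significand, as shown above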

— answered by Willem D'Haeseleer


Because IEEE 754 double-precision floating point only provides 15 to 17 significant decimal digits of precision.
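
If you need the exact value, one approach (a sketch, assuming a BigInt-capable engine, ES2020+; fibonacciBig is an illustrative name) is to do the arithmetic with arbitrary-precision BigInt values:

// Iterative Fibonacci using BigInt, which never loses integer precision.
function fibonacciBig(n) {
    let a = 0n, b = 1n;
    for (let i = 0; i < n; i++) {
        [a, b] = [b, a + b]; // advance the pair (F(i), F(i+1))
    }
    return a;
}

console.log(fibonacciBig(100).toString()); // 354224848179261915075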

— answered by Robert Harvey