Does this for loop ever stop?
for (var i=0; 1/i > 0; i++) { }
If so, when and why? I was told that it stops, but I was given no reason for that.
As part of the investigation, I've written a quite lengthy and detailed article that explains everything that's going on under the hood: Here is what you need to know about JavaScript’s Number type
(I'm not a fan of meta-content, but: gotnull's and le_m's answers are both correct and useful. They were originally, and are even more so with the edits made after this Community Wiki was posted. The original motivation for this CW is largely gone as a result of those edits, but it remains useful, so... Also: While there are only a couple of authors listed, many other community members have helped greatly with comments which have been folded in and cleaned up. This isn't just a CW in name.)
The loop won't stop in a correctly-implemented JavaScript engine. (The engine's host environment might eventually terminate it because it's endless, but that's another thing.)
Here's why:
Initially, when i is 0, the condition 1/i > 0 is true because in JavaScript, 1/0 is Infinity, and Infinity > 0 is true.

After that, i will be incremented and continue to grow as a positive integer value for a long time (a further 9,007,199,254,740,991 iterations). In all of those cases, 1/i will remain > 0 (although the values for 1/i get really small toward the end!) and so the loop continues up to and including the loop where i reaches the value Number.MAX_SAFE_INTEGER.
Numbers in JavaScript are IEEE-754 double-precision binary floating point, a fairly compact format (64 bits) which provides for fast calculations and a vast range. It does this by storing the number as a sign bit, an 11-bit exponent, and a 52-bit significand (although through cleverness it actually gets 53 bits of precision). It's binary (base 2) floating point: The significand (plus some cleverness) gives us the value, and the exponent gives us the magnitude of the number.
Naturally, with just so many significant bits, not every number can be stored. Here is the number 1, and the next highest number after 1 that the format can store, 1 + 2^−52 ≈ 1.00000000000000022, and the next highest after that, 1 + 2 × 2^−52 ≈ 1.00000000000000044:
   +--------------------------------------------------------------- sign bit
  / +-------+------------------------------------------------------ exponent
 / /       | +-------------------------------------------------+--- significand
/ /        | /                                                  |
0 01111111111 0000000000000000000000000000000000000000000000000000 = 1
0 01111111111 0000000000000000000000000000000000000000000000000001 ≈ 1.00000000000000022
0 01111111111 0000000000000000000000000000000000000000000000000010 ≈ 1.00000000000000044
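That 2^−52 gap between 1 and the next representable number is exposed in JavaScript as Number.EPSILON, so the diagram above can be checked directly; a quick sketch:

```javascript
// The gap between 1 and the next representable double is 2^-52,
// which JavaScript exposes as Number.EPSILON.
console.log(Number.EPSILON === Math.pow(2, -52)); // true

// Adding that full gap produces a new, distinct number...
console.log(1 + Math.pow(2, -52) > 1);            // true

// ...but adding anything smaller rounds back down to exactly 1.
console.log(1 + Math.pow(2, -53) === 1);          // true
```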
Note the jump from 1.00000000000000022 to 1.00000000000000044; there's no way to store 1.0000000000000003. That can happen with integers, too: Number.MAX_SAFE_INTEGER (9,007,199,254,740,991) is the highest positive integer value that the format can hold where i and i + 1 are both exactly representable (spec). Both 9,007,199,254,740,991 and 9,007,199,254,740,992 can be represented, but the next integer, 9,007,199,254,740,993, cannot; the next integer we can represent after 9,007,199,254,740,992 is 9,007,199,254,740,994. Here are the bit patterns; note the rightmost (least significant) bit:
   +--------------------------------------------------------------- sign bit
  / +-------+------------------------------------------------------ exponent
 / /       | +-------------------------------------------------+--- significand
/ /        | /                                                  |
0 10000110011 1111111111111111111111111111111111111111111111111111 = 9007199254740991 (Number.MAX_SAFE_INTEGER)
0 10000110100 0000000000000000000000000000000000000000000000000000 = 9007199254740992 (Number.MAX_SAFE_INTEGER + 1)
x xxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx   9007199254740993 (Number.MAX_SAFE_INTEGER + 2) can't be stored
0 10000110100 0000000000000000000000000000000000000000000000000001 = 9007199254740994 (Number.MAX_SAFE_INTEGER + 3)
Remember, the format is base 2, and with that exponent the least significant bit is no longer fractional; it has a value of 2. It can be off (9,007,199,254,740,992) or on (9,007,199,254,740,994); so at this point, we've started to lose precision even at the whole number (integer) scale. Which has implications for our loop!
After completing the i = 9,007,199,254,740,992 loop, i++ gives us ... i = 9,007,199,254,740,992 again; there's no change in i, because the next integer can't be stored and the calculation ends up rounding down. i would change if we did i += 2, but i++ can't change it. So we've reached steady state: i never changes, and the loop never terminates.
Here are the various relevant calculations:
if (!Number.MAX_SAFE_INTEGER) {
  // Browser doesn't have the Number.MAX_SAFE_INTEGER
  // property; shim it. Should use Object.defineProperty
  // but hey, maybe it's so old it doesn't have that either
  Number.MAX_SAFE_INTEGER = 9007199254740991;
}
var i = 0;
console.log(i, 1/i, 1/i > 0); // 0, Infinity, true
i++;
console.log(i, 1/i, 1/i > 0); // 1, 1, true
// ...eventually i is incremented all the way to Number.MAX_SAFE_INTEGER
i = Number.MAX_SAFE_INTEGER;
console.log(i, 1/i, 1/i > 0); // 9007199254740991, 1.1102230246251568e-16, true
i++;
console.log(i, 1/i, 1/i > 0); // 9007199254740992, 1.1102230246251565e-16, true
i++;
console.log(i, 1/i, 1/i > 0); // 9007199254740992, 1.1102230246251565e-16, true (no change)
console.log(i == i + 1); // true
The condition 1/i > 0 will always evaluate to true:

- Initially it's true because 1/0 evaluates to Infinity, and Infinity > 0 is true.
- It stays true since 1/i > 0 is true for all i < Infinity, and i++ never reaches Infinity.
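Both bullet points are easy to verify directly; even for the largest finite i the quotient 1/i underflows toward zero but stays strictly positive (a quick check):

```javascript
console.log(1 / 0 > 0);                       // true: 1/0 is Infinity
console.log(1 / Number.MAX_SAFE_INTEGER > 0); // true
console.log(1 / Number.MAX_VALUE > 0);        // true: a tiny subnormal, but still > 0
console.log(1 / Infinity > 0);                // false: this would be 0, but i++ never gets there
```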
Why does i++ never reach Infinity? Due to the limited precision of the Number datatype, there is a value for which i + 1 == i:

9007199254740992 + 1 == 9007199254740992 // true

Once i reaches that value (which corresponds to Number.MAX_SAFE_INTEGER + 1), it will stay the same even after i++.
We therefore have an infinite loop.
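The steady state can be reproduced without actually running the endless loop; here is a bounded sketch that starts i a few steps before the sticking point and shows it stops advancing:

```javascript
var i = Number.MAX_SAFE_INTEGER - 2; // 9007199254740989
for (var step = 0; step < 6; step++) {
  var before = i;
  i++;
  console.log(before, '->', i, i === before ? '(stuck)' : '');
}
// The last three increments print 9007199254740992 -> 9007199254740992 (stuck):
// once i reaches 2^53, i++ rounds back to the same value forever.
```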
Why is 9007199254740992 + 1 == 9007199254740992?
JavaScript's Number datatype is actually a 64-bit IEEE 754 double-precision float. Each Number is disassembled and stored as three parts: a 1-bit sign, an 11-bit exponent, and a 52-bit mantissa. Its value is (−1)^sign × mantissa × 2^exponent.

How is 9007199254740992 represented? As 1.0 × 2^53, or in binary:

0 10000110100 0000000000000000000000000000000000000000000000000000
Incrementing the mantissa's least significant bit, we get the next higher number:

0 10000110100 0000000000000000000000000000000000000000000000000001

The value of that number is 1.00000000000000022… × 2^53 = 9007199254740994.
What does that mean? A Number can either be 9007199254740992 or 9007199254740994, but nothing in between.
Now, which one shall we choose to represent 9007199254740992 + 1? The IEEE 754 rounding rules (round half to even) give the answer: 9007199254740992.
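The round-half-to-even rule can be seen at work on the neighbouring literals, too; 9007199254740993 sits exactly halfway between the two representable values and rounds to the one whose significand ends in an even (zero) bit:

```javascript
// 9007199254740993 is exactly halfway between 9007199254740992 and
// 9007199254740994; "round half to even" picks 9007199254740992.
console.log(9007199254740993 === 9007199254740992); // true

// 9007199254740995 is halfway between 9007199254740994 and 9007199254740996;
// this time the even neighbour is 9007199254740996.
console.log(9007199254740995 === 9007199254740996); // true
```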