I stumbled upon this issue with parseInt
and I'm not sure why this is happening.
console.log(parseInt("16980884512690999")); // gives 16980884512691000
console.log(parseInt("169808845126909101")); // gives 169808845126909100
I'm clearly not hitting any number limits in JavaScript
(Number.MAX_VALUE = 1.7976931348623157e+308).
Running Win 7 64 bit if that matters.
What am I overlooking?
Fiddle
parseInt() is an easy way to parse a string value and return its leading integer portion (it truncates rather than rounds). If you're working with a special use-case and require a different numerical base, pass the radix of your choosing as the second argument.
In order to facilitate this kind of operation, JavaScript provides four common methods: Math.round(), Math.ceil(), Math.floor() and parseInt(). Math.round() rounds to the nearest integer, Math.ceil() rounds up, Math.floor() rounds down, and parseInt() parses a string and keeps only the integer part.
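A quick illustration of the four methods (the input values here are my own examples, not from the original post):

```javascript
// Math.round: nearest integer; Math.ceil: round up; Math.floor: round down;
// parseInt: parse a string and keep only the leading integer part.
console.log(Math.round(4.7));     // 5
console.log(Math.ceil(4.2));      // 5
console.log(Math.floor(4.7));     // 4
console.log(parseInt("4.7", 10)); // 4 (parsing stops at the decimal point)
```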
parseInt() doesn't always convert to exactly the integer you expect. In JavaScript, all numbers are floating point; integers are simply floating-point numbers without a fraction. Converting a number n to an integer means finding the integer that is "closest" to n (where "closest" is a matter of definition).
parseFloat() is quite similar to parseInt(), with two main differences. First, unlike parseInt(), parseFloat() does not take a radix as an argument. This means the string must represent a floating-point number in decimal form (radix 10), not octal (radix 8) or hexadecimal (radix 16).
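A short sketch of the difference (again, example inputs are my own):

```javascript
console.log(parseInt("3.99", 10)); // 3   (stops at the decimal point)
console.log(parseFloat("3.99"));   // 3.99
console.log(parseInt("ff", 16));   // 255 (radix 16, i.e. hexadecimal)
console.log(parseFloat("0x10"));   // 0   (parseFloat reads only decimal; parsing stops at "x")
```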
Don't confuse Number.MAX_VALUE with the maximum accurate value. All numbers in JavaScript are stored as 64-bit floating point, which means you can get very high (and low) numbers, but they'll only be accurate to a certain point.
Double-precision floats (i.e. JavaScript's Numbers) have 53 bits of significand precision, which means the highest/lowest "certainly accurate" integer in JavaScript is +/-9007199254740992 (2^53). Numbers above/below that may still turn out to be accurate (the ones that simply add 0's on the end, because the exponent bits can be used to represent that).
Or, in the words of ECMAScript: "Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and −0)."
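You can see that boundary directly (Number.MAX_SAFE_INTEGER was added in ES2015):

```javascript
// 2^53 is the edge of exact integer precision in a double.
console.log(Math.pow(2, 53));         // 9007199254740992
console.log(Math.pow(2, 53) + 1);     // 9007199254740992 (2^53 + 1 rounds back down)
console.log(Math.pow(2, 53) + 2);     // 9007199254740994
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2^53 - 1
```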
Update
Just to add a bit to the existing question: the ECMAScript spec requires that if an integral Number has fewer than 22 digits, .toString() will output it in standard decimal notation (e.g. 169808845126909100000 as in your example). If it has 22 or more digits, it will be output in normalized scientific notation (e.g. 1698088451269091000000, with one additional 0, is output as 1.698088451269091e+21).
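Checking that 21/22-digit cutoff with the numbers above:

```javascript
// 21 digits: still plain decimal notation
console.log((169808845126909100000).toString());  // "169808845126909100000"
// 22 digits: switches to normalized scientific notation
console.log((1698088451269091000000).toString()); // "1.698088451269091e+21"
```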
From this answer
All numbers in JavaScript are 64-bit "double" precision IEEE 754 floating point.
The largest positive whole number that can therefore be accurately represented is 2^53; the remaining bits are reserved for the exponent.
2^53 = 9007199254740992
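Applying this to the number from the question: the string is parsed correctly, but the resulting Number can't hold all 17 significant digits, so it snaps to the nearest representable double.

```javascript
console.log(16980884512690999 > Math.pow(2, 53));     // true: above the exact-integer range
console.log(parseInt("16980884512690999", 10));       // 16980884512691000
// Both literals map to the same 64-bit double:
console.log(16980884512690999 === 16980884512691000); // true
```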