I am reading Douglas Crockford's book, JavaScript: The Good Parts, and he says:
JavaScript has a single number type. Internally, it is represented as 64-bit floating point, the same as Java’s double. Unlike most other programming languages, there is no separate integer type, so 1 and 1.0 are the same value. This is a significant convenience because problems of overflow in short integers are completely avoided...
I am not overly familiar with other languages, so I would like a bit of an explanation. I can understand why 64 bits helps, but his statement seems to turn on the absence of separate float and double types.
What would be an example (pseudocode, perhaps) of a short integer overflow situation that won't occur in JS?
Avoidance. By declaring variables with data types large enough to hold every value that might be computed and stored in them, overflow can always be avoided.
We can also multiply recursively to work around overflow. To multiply a*b, first calculate a*(b/2), then add it twice (adding a once more if b is odd). To calculate a*(b/2), calculate a*(b/4), and so on, similar to the O(log n) exponentiation algorithm. A sketch follows below.
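Here is a minimal sketch of that halving-and-doubling idea, written in JavaScript purely to illustrate the structure (the helper name mulByDoubling is ours; JavaScript itself doesn't need this, but in a language with fixed-width integers each step only ever doubles a partial result or adds a):
function mulByDoubling(a, b) {
  if (b === 0) return 0;
  var half = mulByDoubling(a, Math.floor(b / 2)); // a * floor(b / 2)
  // double it; add a once more if b is odd
  return b % 2 === 0 ? half + half : half + half + a;
}
console.log(mulByDoubling(7, 13)); // 91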
In JavaScript, the number type is a 64-bit IEEE 754 floating-point number, not an integer, so it doesn't follow the integer overflow/underflow patterns common in other languages.
Use a different data type. If we want to allow values larger than 2147483647 (or smaller than -2147483648), we can simply use a wider type such as Java's long, or an arbitrary-precision type such as BigInteger. Variables of type long can still overflow, but their minimum and maximum values are much larger, which is sufficient in most situations.
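JavaScript's own analogue of BigInteger is BigInt, an arbitrary-precision integer type, so the same escape hatch exists there:
var big = 9007199254740993n; // already beyond Number's exact integer range
console.log(big + 1n);       // 9007199254740994n (exact, no precision loss)
console.log(2n ** 64n);      // 18446744073709551616n (no overflow either)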
Suppose you had an 8-bit unsigned number.
Here is a selection of decimal and binary representations:
1: 00000001
2: 00000010
15: 00001111
255: 11111111
If you have 255 and add 1, what happens? There are no more bits, so it wraps around to
0: 00000000
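Incidentally, you can reproduce this wraparound in JavaScript with a typed array, which (unlike a plain number) really does store fixed-width integers:
var bytes = new Uint8Array(1);
bytes[0] = 255;        // 11111111
bytes[0] += 1;         // no more bits, so it wraps
console.log(bytes[0]); // 0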
Here's a demonstration in C# using uint (an unsigned 32-bit integer):
using System;

public class Program
{
    public static void Main()
    {
        uint n = 4294967294; // uint.MaxValue is 4294967295
        for (int i = 0; i < 4; ++i)
        {
            Console.WriteLine("n = {0}", n);
            n = n + 1; // wraps to 0 after uint.MaxValue
        }
    }
}
This will output:
n = 4294967294
n = 4294967295
n = 0
n = 1
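As an aside, JavaScript's bitwise operators do coerce their operands to 32-bit integers, so a comparable wraparound can be provoked deliberately; the addition itself is still exact, and only the coercion truncates:
console.log((4294967295 + 1) >>> 0); // 0 (the exact sum 4294967296 is reduced mod 2^32)
console.log((2147483647 + 1) | 0);   // -2147483648 (signed 32-bit truncation)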
With ordinary arithmetic, however, this is the problem you don't get in JavaScript.
For example:
var n = 9007199254740991; // 2^53 - 1, also known as Number.MAX_SAFE_INTEGER
var m = n + 1;            // 9007199254740992, still exact
var p = m + 1;            // 9007199254740993 is not representable; it rounds back down
alert('n = ' + n + ' and m = ' + m + ' and p = ' + p);
You will see:
n = 9007199254740991 and m = 9007199254740992 and p = 9007199254740992
Rather than wrapping around, the number representation loses precision.
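If you need to detect when a value has drifted outside the exactly-representable integer range, JavaScript provides Number.MAX_SAFE_INTEGER and Number.isSafeInteger for precisely this:
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991 (2^53 - 1)
console.log(Number.isSafeInteger(9007199254740991));  // true
console.log(Number.isSafeInteger(9007199254740992));  // false (gaps begin here)
console.log(Math.pow(2, 60) + 1 === Math.pow(2, 60)); // true (at 2^60 the gap between representable values is 256)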
Note that this precision loss is not unique to JavaScript; it is what you should expect from any floating-point data type. Another .NET example, this time with a 32-bit float:
using System;

public class Program
{
    public static void Main()
    {
        float n = 16777214; // 2^24 - 2
        for (int i = 0; i < 4; ++i)
        {
            // Note: .NET Framework formats a float to at most 7 significant
            // digits, so the printed value is itself rounded; the stored
            // 16777214 prints as 16777210.
            Console.WriteLine("n = {0}", n.ToString("0"));
            Console.WriteLine("(n+1) - n = {0}", (n + 1) - n);
            n = n + 1; // past 2^24, adding 1 no longer changes a float
        }
    }
}
This will output:
n = 16777210
(n+1) - n = 1
n = 16777220
(n+1) - n = 1
n = 16777220
(n+1) - n = 0
n = 16777220
(n+1) - n = 0
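For what it's worth, the same single-precision behavior can be mimicked in JavaScript with Math.fround, which rounds its argument to the nearest 32-bit float:
var f = Math.fround(16777216);         // 2^24
console.log(Math.fround(f + 1) === f); // true (16777217 has no float32 representation)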