I run this test in different Node versions:
    function test() {
        var i;
        var bigArray = {};
        var start = new Date().getTime();

        for (i = 0; i < 100000; i += 1) {
            bigArray[i] = {};
            var j = Math.floor(Math.random() * 10000000);
            bigArray[i]["a" + j] = i.toString(32);
            if (i % 1000 === 0) console.log(i);
        }

        var end = new Date().getTime();
        var time = end - start;
        console.log('Execution time: ' + time);
    }

    test();
As you can see, it simply creates an object with 100,000 fields, where each field holds an object with a single field of its own. The key of that inner object is forced to be alphanumeric (when the key is numeric, the test performs normally).
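For reference, this is a sketch of the numeric-key variant I mean (my reconstruction; I'm assuming the only change is dropping the "a" prefix so the inner key stays numeric):

    function testNumeric() {
        var i;
        var bigArray = {};
        var start = new Date().getTime();

        for (i = 0; i < 100000; i += 1) {
            bigArray[i] = {};
            var j = Math.floor(Math.random() * 10000000);
            // Numeric key: no "a" prefix, so V8 can treat it as an element index
            bigArray[i][j] = i.toString(32);
        }

        console.log('Execution time: ' + (new Date().getTime() - start));
    }

    testNumeric();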
When I run this test in different JavaScript implementations/versions, I get these results:
    v0.8.28     ->  2716 ms
    v0.10.40    -> 73570 ms
    v0.12.7     -> 92427 ms
    iojs v2.4.0 ->   510 ms
    chrome      ->  1473 ms
I have also tried to run this test in an asynchronous loop (each loop step in a different tick), but the results are similar to the ones shown above.
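For completeness, a sketch of what I mean by the asynchronous loop (assuming setImmediate, available in Node 0.10+, to put each step on a fresh tick):

    function testAsync(done) {
        var bigArray = {};
        var start = new Date().getTime();
        var i = 0;

        function step() {
            if (i >= 100000) {
                console.log('Execution time: ' + (new Date().getTime() - start));
                return done && done();
            }
            bigArray[i] = {};
            var j = Math.floor(Math.random() * 10000000);
            bigArray[i]["a" + j] = i.toString(32);
            i += 1;
            setImmediate(step); // run the next iteration on a separate tick
        }

        step();
    }

    testAsync();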
I can't understand why this test is so expensive in newer Node versions. Why is it so slow? Is there any special V8 flag that can improve this test?
In order to handle large and sparse arrays, there are two types of array storage internally:

- Fast elements: linear storage for compact key sets
- Dictionary elements: hashtable storage otherwise
It's best not to cause the array storage to flip from one type to another.
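A minimal sketch of what that flip looks like (the variable names are mine, and the exact thresholds vary across V8 versions):

    // Compact keys starting at 0 keep the array in fast-elements mode
    var fast = [];
    for (var i = 0; i < 1000; i += 1) {
        fast[i] = i;
    }

    // A huge, sparse index forces a fallback to dictionary mode,
    // and every later access pays the hashtable cost
    var slow = [];
    slow[0] = 0;
    slow[9999999] = 1; // sparse: the indices in between are holes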
Therefore:

- Use contiguous keys starting at 0 for arrays
- Don't pre-allocate large arrays (e.g. more than 64K elements) to their maximum size; instead grow as you go
- Don't delete elements in arrays, especially numeric arrays
- Don't load uninitialized or deleted elements
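As a hedged illustration of these rules (my own example, not the OP's code):

    // Contiguous keys starting at 0, grown incrementally: stays fast
    var good = [];
    for (var i = 0; i < 100000; i += 1) {
        good.push(i);
    }

    // Anti-patterns from the list above:
    var bad = new Array(100000); // pre-allocating a large array
    bad[5] = 1;
    delete bad[5];               // deleting elements creates holes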
Source and more info: http://www.html5rocks.com/en/tutorials/speed/v8/
PS: this is supposed to improve considerably in the upcoming converged Node.js + io.js release.