While investigating closures in JavaScript I came up with the little example below and I don't really understand what's going on.
I was hoping to play with the garbage collector, assuming that declaring variables with var inside a function called in a tight loop would cause tons of allocations and deallocations. I tried to avoid this by putting my var declarations in the parent scope of a closure, expecting that the closure-based function would be faster. However bad this idea might be in the first place, I stumbled upon this little problem.
var withClosure = function () {
    var a, b, c, d, e, f, g;
    return function () {
        a = 1;
        b = 2;
        c = 3;
        d = 4;
        e = 5;
        f = 6;
        g = 7;
    };
}();
var withoutClosure = function () {
    var a = 1;
    var b = 2;
    var c = 3;
    var d = 4;
    var e = 5;
    var f = 6;
    var g = 7;
};
console.time("without");
for (var i = 0; i < 1000000000; i++) {
    withoutClosure();
}
console.timeEnd("without");

console.time("withcsr");
for (var i = 0; i < 1000000000; i++) {
    withClosure();
}
console.timeEnd("withcsr");
/*
Output on my machine:
without: 1098.329ms
withcsr: 8878.812ms
Tested with node v.6.0.0 and Chrome 50.0.2661.102 (64-bit)
*/
The fact that I assign to the variables in the parent scope makes the closure version run about 8 times slower than the normal version on my machine. Using more variables makes it worse. If I just read the variables instead of assigning to them, the problem isn't there.
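For comparison, the read-only variant I mean looks roughly like this (a sketch with a hypothetical name, not my exact benchmark code):

var readOnlyClosure = function () {
    var a = 1, b = 2, c = 3, d = 4, e = 5, f = 6, g = 7;
    return function () {
        // only reads the captured variables; no stores into the parent scope
        return a + b + c + d + e + f + g;
    };
}();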
What causes this? Can someone explain?
In the example without the closure, any decent JavaScript engine will realise that the variables within the function are initialised but never read before going out of scope. The assignments are dead stores, so they can be removed without affecting the observable behaviour of the function.
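Conceptually, this is roughly what the engine can reduce withoutClosure to after dead-store elimination (the name below is hypothetical):

var withoutClosureOptimised = function () {
    // all seven assignments were dead stores: the values were
    // written but never read, so the body is effectively empty
};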
In the example with the closure, the variables remain in scope after the outer function returns: they live in the closure's context rather than in stack slots that die with the call, so the stores are potentially observable and can't be optimised out.
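A minimal sketch (with hypothetical names) of why those stores count as observable: another closure over the same scope could read the variables at any time, so the engine has to actually perform the writes.

var pair = function () {
    var a;
    return {
        set: function () { a = 1; },   // stores into the shared scope
        get: function () { return a; } // a second closure reads the same variable
    };
}();

pair.set();
console.log(pair.get()); // 1 — the store is observable, so it can't be removed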
This talk explains in depth some of the optimisations JIT JavaScript compilers make: https://www.youtube.com/watch?v=65-RbBwZQdU