Looking through the dom.js source from the Closure Library, I found this (in goog.dom.getElementsByTagNameAndClass_):
if (opt_class) {
  var arrayLike = {};
  var len = 0;
  for (var i = 0, el; el = els[i]; i++) {
    var className = el.className;
    // Check if className has a split function since SVG className does not.
    if (typeof className.split == 'function' &&
        goog.array.contains(className.split(' '), opt_class)) {
      arrayLike[len++] = el;
    }
  }
  arrayLike.length = len;
  return arrayLike;
}
What would be the benefit of doing this over a regular array?
In this case, my guess would be that arrayLike[len++] = el is an optimization over actualArray.push(el). However, after running a simple benchmark (code provided below the results), it appears that this method is actually slower than using a standard array, both with the push method and with the same indexed-construction technique.
Results (from OS X 10.5.8, FF 3.5.6) *:
push construction: 199ms (fastest)
indexed construction: 209ms
associative construction: 258ms (slowest)
In conclusion, why Closure uses an associative array in this case is beyond me. There may well be a reason (for instance, this technique may perform better in Chrome, or, less dubiously, it may perform better in future releases of JavaScript engines in general), but I don't see a good one here.
* A mean was not provided because the times varied from run to run, but they consistently came out in the same order. If you're interested, you can run the benchmark yourself.
Benchmark code:
var MAX = 100000, i = 0,
    a1 = {}, a2 = [], a3 = [],
    value = "";

for ( i = 0; i < 1024; ++i ) {
  value += "a";
}

console.time("associative construction");
for ( i = 0; i < MAX; ++i ) {
  a1[i] = value;
}
a1.length = i;
console.timeEnd("associative construction");

console.time("push construction");
for ( i = 0; i < MAX; ++i ) {
  a2.push(value);
}
console.timeEnd("push construction");

console.time("indexed construction");
for ( i = 0; i < MAX; ++i ) {
  a3[i] = value;
}
console.timeEnd("indexed construction");
The size and type of value are insignificant to the test: JavaScript strings are immutable and assigned by reference, so the assignments never copy the string data (effectively copy-on-write). A large (1 kB) value was used for the purpose of convincing readers who are not familiar with this feature of JavaScript.
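A quick illustration of that point (the variable names here are mine and not part of the benchmark):

var big = new Array(1025).join("a"); // a 1 kB string
var alias = big;  // copies only a reference, not the 1 kB of data
alias += "b";     // building a new string leaves `big` untouched
console.log(big.length, alias.length); // 1024 1025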
The author of the code used an empty JavaScript object as the basis of an array-like object, i.e. one that can be accessed by index and has a length property.
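For readers unfamiliar with the term, here is a minimal sketch of such an object (the names are illustrative, not from Closure):

// An array-like object: indexed properties plus a numeric length.
var arrayLike = { 0: 'a', 1: 'b', length: 2 };

// It works with a plain indexed for loop...
for (var i = 0; i < arrayLike.length; i++) {
  console.log(arrayLike[i]);
}

// ...but it has none of Array's own methods. Generic Array methods
// can still be applied to it explicitly:
var realArray = Array.prototype.slice.call(arrayLike); // ['a', 'b']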
There could be two reasons that I could think of:
- capacity
- length of memory

I'm betting that similar code would be found in other JavaScript libraries, and that it's the result of benchmarking and finding the best-fit solution across different browsers.
edited after comment by Justin
Upon further googling, it appears that array-like objects are common among JavaScript developers: check out JavaScript: The Definitive Guide by David Flanagan, which has a whole sub-chapter on array-like objects. Also these guys mention them.
There is no mention of why one should prefer an array-like object over a real array, though. This could be a good SO question.
So a third option could be the key: compliance with the norms of JavaScript APIs.
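The DOM itself follows this norm: its collection types are array-like but are not Arrays. A quick check in a browser console (illustrative, not from the original post):

var divs = document.getElementsByTagName('div'); // an HTMLCollection
typeof divs.length;    // 'number'   - it has a length...
divs instanceof Array; // false      - ...but it is not an Array
typeof divs.push;      // 'undefined' - and has no Array methods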
I think this example creates an array-like object instead of a real array because other DOM methods also return array-like objects (NodeList). Consistently using "array-likes" in the API forces the developer to avoid array-specific methods (using goog.array instead), so there are fewer gotchas when someone later decides to change a getElementsByTagNameAndClass call to, say, getElementsByTagName.
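As a rough sketch of what that looks like in calling code (goog.dom.getElementsByTagNameAndClass and goog.array.forEach are real Closure functions; the tag/class arguments and the callback body are made up for illustration):

// Works whether the result is a real array, an array-like object,
// or a NodeList, because goog.array accepts anything with a length:
var els = goog.dom.getElementsByTagNameAndClass('div', 'highlight');
goog.array.forEach(els, function(el) {
  el.style.display = 'none';
});

// By contrast, els.push(...) or els.forEach(...) would break as soon
// as the call returned an array-like instead of a real Array.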