Why aren't TypedArrays faster than regular arrays? I want to use precalculated values for CLZ (the count-leading-zeros function), and I don't want them treated as ordinary objects.
http://jsperf.com/array-access-speed-2/2
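For context, here is a minimal sketch of the kind of precomputed CLZ table I have in mind: an 8-bit lookup table in a Uint8Array, combined into a 32-bit CLZ. (Modern engines also provide Math.clz32 built in; the clz32 helper below is just for illustration.)

// Sketch only: precompute clz for every 8-bit value.
var clzTable = new Uint8Array(0x100);
clzTable[0] = 8; // all 8 bits of 0 are leading zeros
for (var i = 1; i < 0x100; ++i) {
  var n = 0;
  for (var v = i; v < 0x80; v <<= 1) ++n; // shift left until the top bit is set
  clzTable[i] = n;
}

// 32-bit CLZ built from the 8-bit table.
function clz32(x) {
  x >>>= 0; // force unsigned 32-bit
  if (x >= 0x1000000) return clzTable[x >>> 24];
  if (x >= 0x10000) return 8 + clzTable[x >>> 16];
  if (x >= 0x100) return 16 + clzTable[x >>> 8];
  return 24 + clzTable[x];
}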
Preparation code:
Benchmark.prototype.setup = function() {
  var buffer = new ArrayBuffer(0x10000);
  var Uint32 = new Uint32Array(buffer);
  var arr = [];
  for (var i = 0; i < 0x10000; ++i) {
    Uint32[i] = (Math.random() * 0x100000000) | 0;
    arr[i] = Uint32[i];
  }
  var sum = 0;
};
Test 1:
sum = arr[(Math.random() * 0x10000) | 0];
Test 2:
sum = Uint32[(Math.random() * 0x10000) | 0];
PS: Maybe my perf tests are invalid; feel free to correct me.
The array does not contain holes (yes, hole-free arrays are fast, but…). The array is preallocated, so creating and initializing the typed array is fast.
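To illustrate what "holes" means here, a quick sketch:

var packed = [];
for (var i = 0; i < 4; ++i) packed[i] = i; // every index 0..3 assigned: no holes
var holey = [];
holey[3] = 3;  // indices 0..2 were never assigned: holes
0 in packed;   // true
0 in holey;    // false: a hole, not an element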
JavaScript typed arrays are array-like objects that provide a mechanism for reading and writing raw binary data in memory buffers. Array objects grow and shrink dynamically and can have any JavaScript value.
A typed array greatly reduces what the engine must prove before it can optimise around it. A value read from a typed array is guaranteed to be of a known type, and engines can optimise for the result being that type.
The ArrayBuffer object is used to represent a generic, fixed-length raw binary data buffer. It is an array of bytes, often referred to in other languages as a "byte array".
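For instance, a quick illustration of that type guarantee:

var u32 = new Uint32Array(2);
u32[0] = "123";   // coerced on write: stored as the number 123
typeof u32[0];    // "number", always

var plain = [];
plain[0] = "123"; // stored as-is
typeof plain[0];  // "string": a plain array element can hold any type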
Modern engines will use true arrays behind the scenes even if you use Array, if they think they can, falling back on property map "arrays" if you do something that makes them think they can't use a true array.
Also note that, as radsoc points out, var buffer = new ArrayBuffer(0x10000) followed by var Uint32 = new Uint32Array(buffer) produces a Uint32Array whose length is 0x4000 (0x10000 / 4), not 0x10000, because the value you give ArrayBuffer is in bytes, and each Uint32Array entry occupies four bytes. All of the below uses new Uint32Array(0x10000) instead (and always did, even prior to this edit) to compare apples with apples.
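To see the difference concretely (the constructor argument is bytes in one case and elements in the other):

new Uint32Array(new ArrayBuffer(0x10000)).length; // 0x4000 elements (0x10000 bytes / 4)
new Uint32Array(0x10000).length;                  // 0x10000 elements (0x40000 bytes)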
So let's start there, with new Uint32Array(0x10000): http://jsperf.com/array-access-speed-2/11 (sadly, JSPerf has lost this test and its results, and is now offline entirely)
That suggests that because you're filling the array in a simple, predictable way, a modern engine continues to use a true array (with the performance benefits thereof) under the covers rather than shifting over. We see basically the same performance for both. The difference in speed could relate to the type conversion involved in taking the Uint32 value and assigning it to sum as a number (though I'd be surprised if that conversion isn't deferred...).
Add some chaos, though:
var Uint32 = new Uint32Array(0x10000);
var arr = [];
for (var i = 0x10000 - 1; i >= 0; --i) {
  Uint32[Math.random() * 0x10000 | 0] = (Math.random() * 0x100000000) | 0;
  arr[Math.random() * 0x10000 | 0] = (Math.random() * 0x100000000) | 0;
}
var sum = 0;
...so that the engine has to fall back on old-fashioned property map "arrays," and you see that typed arrays markedly outperform the old-fashioned kind: http://jsperf.com/array-access-speed-2/3 (sadly, JSPerf has lost this test and its results)
Clever, these JavaScript engine engineers...
The specific thing you do with the non-array nature of the Array array matters, though; consider:
var Uint32 = new Uint32Array(0x10000);
var arr = [];
arr.foo = "bar"; // <== Non-element property
for (var i = 0; i < 0x10000; ++i) {
  Uint32[i] = (Math.random() * 0x100000000) | 0;
  arr[i] = (Math.random() * 0x100000000) | 0;
}
var sum = 0;
That's still filling the array predictably, but we add a non-element property (foo) to it: http://jsperf.com/array-access-speed-2/4 (sadly, JSPerf has lost this test and its results). Apparently, engines are quite clever, and keep that non-element property off to the side while continuing to use a true array for the element properties.
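You can see that separation directly; a non-element property never affects the element storage:

var arr = [1, 2, 3];
arr.foo = "bar";  // <== Non-element property
arr.length;       // still 3: "foo" is not an element
Object.keys(arr); // ["0", "1", "2", "foo"]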
I'm at a bit of a loss to explain why standard arrays should get faster there compared to our first test above. Measurement error? Vagaries in Math.random? But we can still be fairly sure the array-specific data in the Array is a true array.
Whereas if we do the same thing but fill in reverse order:
var Uint32 = new Uint32Array(0x10000);
var arr = [];
arr.foo = "bar"; // <== Non-element property
for (var i = 0x10000 - 1; i >= 0; --i) { // <== Reverse order
  Uint32[i] = (Math.random() * 0x100000000) | 0;
  arr[i] = (Math.random() * 0x100000000) | 0;
}
var sum = 0;
...we get back to typed arrays winning out — except on IE11: http://jsperf.com/array-access-speed-2/9 (sadly, JSPerf has lost this test and its results)
var buffer = new ArrayBuffer(0x10000);
var Uint32 = new Uint32Array(buffer);
is not the same thing as:
var Uint32 = new Uint32Array(0x10000);
not because of the new ArrayBuffer (you always get an array buffer: see Uint32.buffer in both cases), but because of the length parameter: ArrayBuffer takes its length in bytes, while Uint32Array takes its length in elements, and each element occupies 4 bytes.
So, in the first case (and in your code), Uint32.length is 0x10000 / 4 = 0x4000, and your loops are out of bounds 3 out of 4 times. But sadly you will never get errors, only poor performance.
Using new ArrayBuffer, you have to declare Uint32 like this:
var buffer = new ArrayBuffer(0x10000 * 4);
var Uint32 = new Uint32Array(buffer);
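Equivalently, you can avoid hard-coding the 4 by using the BYTES_PER_ELEMENT constant:

var buffer = new ArrayBuffer(0x10000 * Uint32Array.BYTES_PER_ELEMENT);
var Uint32 = new Uint32Array(buffer); // length 0x10000, as intended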
See jsperf with (0x10000) and jsperf with (0x10000 * 4).