I've been looking for an efficient way to process large lists of vectors in JavaScript, so I created a suite of performance tests that perform in-place scalar-vector multiplication using different data structures:
AoS (array of structures) implementation:
var vectors = [];
// code for populating array
var vector;
for (var i = 0, li = vectors.length; i < li; ++i) {
    vector = vectors[i];
    vector.x = 2 * vector.x;
    vector.y = 2 * vector.y;
    vector.z = 2 * vector.z;
}
SoA (structure of arrays) implementation:
var x = new Float32Array(N);
var y = new Float32Array(N);
var z = new Float32Array(N);
for (var i = 0, li = x.length; i < li; ++i) {
    x[i] = 2 * x[i];
    y[i] = 2 * y[i];
    z[i] = 2 * z[i];
}
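For reference, the population code is elided above; a minimal sketch of how the two layouts might be filled before the timed loops run (N and the use of random data are assumptions, not the original setup code) looks like this:

// Hypothetical setup code, assuming N vectors of random data.
var N = 100000;

// AoS: one plain array of N objects, each with x/y/z properties.
var vectors = [];
for (var i = 0; i < N; ++i) {
    vectors.push({ x: Math.random(), y: Math.random(), z: Math.random() });
}

// SoA: one typed array per component.
var x = new Float32Array(N);
var y = new Float32Array(N);
var z = new Float32Array(N);
for (var i = 0; i < N; ++i) {
    x[i] = Math.random();
    y[i] = Math.random();
    z[i] = Math.random();
}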
The AoS implementation is at least 5 times faster, which caught me by surprise: it uses one more index lookup per iteration than the SoA implementation, and the engine has to work without a guaranteed data type.
Why does this occur? Is this due to browser optimization? Cache misses?
On a side note, SoA is still slightly more efficient when performing addition over a list of vectors:
AoS:
var AoS1 = [];
var AoS2 = [];
var AoS3 = [];
// code for populating arrays
for (var i = 0; i < N; ++i) {
    AoS3[i].x = AoS1[i].x + AoS2[i].x;
}
SoA:
var x1 = new Float32Array(N);
var x2 = new Float32Array(N);
var x3 = new Float32Array(N);
for (var i = 0; i < N; ++i) {
    x3[i] = x1[i] + x2[i];
}
Is there a way I can tell when an operation is going to be more/less efficient for a given data structure?
EDIT: I failed to emphasize that the SoA implementation uses typed arrays, which is why this performance behavior struck me as odd. Despite the data-type guarantee that typed arrays provide, the plain array of objects is faster. I have yet to see a duplicate of this question.
EDIT2: I've discovered the behavior no longer occurs when the declaration of vector is moved to the preparation code; AoS is ostensibly faster only when vector is declared right next to the for loop. This makes little sense to me, particularly since the engine should just hoist the declaration to the top of the scope anyway. I'm not going to question this further, since I suspect an issue with the testing framework.
EDIT3: I got a response from the developers of the testing platform, and they've confirmed the performance difference is due to outer scope lookup. SoA is still the most efficient, as expected.
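To illustrate what that outer-scope lookup means in practice, here is a minimal sketch (the names and N are hypothetical, and this is not the actual benchmark harness): a variable captured from an enclosing scope can cost an extra scope-chain lookup per access compared with a plain local reference.

// Sketch only: scaleOuter and scaleLocal are hypothetical names, not part of the original tests.
var N = 1000000;
var data = new Float32Array(N); // lives in the outer scope, like variables declared in setup code

function scaleOuter() {
    // every access to data reaches out of this function's own scope
    for (var i = 0, li = data.length; i < li; ++i) {
        data[i] = 2 * data[i];
    }
}

function scaleLocal() {
    var local = data; // copy the reference into a local variable first
    for (var i = 0, li = local.length; i < li; ++i) {
        local[i] = 2 * local[i];
    }
}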
A structure built around a user-defined data type tends to be slower because accessing and searching for an element is slower in a structure than in an array. Arrays, on the other hand, provide faster element access and lookup, and therefore better performance.
It also comes down to the size of your data. Generally it is faster to use object key-value pairs when you have large amounts of data; for small datasets, arrays can be faster.
Both objects and arrays are considered “special” in JavaScript. Objects are a mutable data type that can store a collection of values rather than just a single one, and arrays are likewise a mutable type used to store lists of values.
The structure of the tests used for benchmarking seems to have allowed them to overlap, causing either undefined or undesired behavior. A cleaner test (https://www.measurethat.net/Benchmarks/Show/474/0/soa-vs-aos) shows little difference between the two, with SoA executing slightly (about 30%) faster.
However, none of this matters much to the bottom line when it comes to performance; this is an exercise in micro-optimization. What you are essentially comparing is O(n) to O(n), with some nuance involved. The small percentage difference will not have a noticeable effect overall, since O(n) is already considered an acceptable time complexity.
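If you want to reproduce the comparison outside a benchmarking site, a rough self-contained sketch like the one below keeps both setups in the same function scope, which avoids the setup-scope issue discussed above. The helper name and N are arbitrary, a single pass is not a rigorous benchmark, and the numbers will vary by engine and warm-up:

// Rough sketch: times one pass of each loop with performance.now().
function compare(N) {
    var i;

    // AoS setup: a plain array of objects
    var vectors = [];
    for (i = 0; i < N; ++i) {
        vectors.push({ x: 1, y: 2, z: 3 });
    }

    // SoA setup: one typed array per component
    var x = new Float32Array(N);
    var y = new Float32Array(N);
    var z = new Float32Array(N);

    var t0 = performance.now();
    for (i = 0; i < N; ++i) {
        var v = vectors[i];
        v.x = 2 * v.x;
        v.y = 2 * v.y;
        v.z = 2 * v.z;
    }
    var t1 = performance.now();
    for (i = 0; i < N; ++i) {
        x[i] = 2 * x[i];
        y[i] = 2 * y[i];
        z[i] = 2 * z[i];
    }
    var t2 = performance.now();

    console.log('AoS:', (t1 - t0).toFixed(2), 'ms; SoA:', (t2 - t1).toFixed(2), 'ms');
}

compare(1000000);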