
Better to multiply matrices in javascript or a shader?

I've been looking at several WebGL examples. Consider MDN's tutorial. Their vertex shader multiplies each vertex by a perspective matrix and a world position matrix:

gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);

But the uMVMatrix is itself the product of several transforms (translation, rotation, etc) calculated in javascript with the help of some matrix libraries.
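For context, the JavaScript side of that looks roughly like this (a sketch assuming a recent glMatrix build; the uniform locations and the angle variable are placeholders, not MDN's exact code):

// Compose the model-view matrix once per object on the CPU
const pMatrix = mat4.create();
mat4.perspective(pMatrix, Math.PI / 4, canvas.width / canvas.height, 0.1, 100.0);

const mvMatrix = mat4.create();                        // start from identity
mat4.translate(mvMatrix, mvMatrix, [0.0, 0.0, -6.0]);  // move the model into view
mat4.rotate(mvMatrix, mvMatrix, angle, [0, 1, 0]);     // spin around the y axis

// One upload of each composed matrix per draw call, however many vertices follow
gl.uniformMatrix4fv(uPMatrixLocation, false, pMatrix);
gl.uniformMatrix4fv(uMVMatrixLocation, false, mvMatrix);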

It seems like it would be faster to calculate those products directly in the shader; surely that's faster than doing it in JavaScript. Is there some reason why they chose this approach?

Now, I guess you can stack an arbitrary number of transforms in an arbitrary order this way, which is more flexible. But suppose that flexibility isn't needed: is there any reason to avoid doing the transforms directly in the shader? Something like

gl_Position = uPMatrix * uRotationMatrix * uScaleMatrix * uTranslationMatrix * vec4(aVertexPosition, 1.0);
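Sketched out a bit more fully (the uniform names are just my guesses, with the shader source kept as a JavaScript string):

const vertexShaderSource = `
  attribute vec3 aVertexPosition;
  uniform mat4 uPMatrix;
  uniform mat4 uRotationMatrix;
  uniform mat4 uScaleMatrix;
  uniform mat4 uTranslationMatrix;
  void main() {
    // the full product is re-evaluated for every vertex
    gl_Position = uPMatrix * uRotationMatrix * uScaleMatrix * uTranslationMatrix
                * vec4(aVertexPosition, 1.0);
  }
`;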

Edit: To add some context, in my particular case I'll only be rendering 2D rectangular entities (mostly sprites), so the number of vertices will always be just 4.

Given the overhead of bringing in a library to do fast matrix multiplication in JavaScript, it seems like pushing these calculations into the shader is definitely the way to go for my particular case.

(Plus, even if it were slower on balance than doing it in JavaScript, shunting the calculations onto the GPU is probably worth something in and of itself!)

asked Jan 08 '14 by starwed

1 Answer

It depends ...

If you do it in the shader, it's done for either every vertex (vertex shader) or every pixel (fragment shader). Even a GPU does not have infinite speed, so let's say you are drawing 1 million vertices. With 1 set of matrix math calculations in JavaScript vs 1 million matrix calculations on the GPU, the JavaScript will likely win.
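To make the trade-off concrete, here's a rough sketch (assuming glMatrix; the uniform location names are made up):

// Option A: compose on the CPU, once per draw call
const mvpMatrix = mat4.create();
mat4.multiply(mvpMatrix, pMatrix, mvMatrix);           // one 4x4 multiply in JavaScript
gl.uniformMatrix4fv(uMVPMatrixLocation, false, mvpMatrix);
// shader: gl_Position = uMVPMatrix * vec4(aVertexPosition, 1.0);

// Option B: upload both matrices and let the vertex shader do
// uPMatrix * uMVMatrix, so the same multiply runs once per vertex,
// i.e. 1 million times for a 1 million vertex mesh.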

Of course your mileage may vary. Every GPU is different. Some GPUs are faster than others. Some drivers do vertex calculations on the CPU. Some CPUs are faster than others.

You can test, but unfortunately, since you are writing for the web, you have no idea what browser the user is running, nor what CPU speed, GPU, or driver they have. So, it really depends.

On top of that, passing matrices to the shader is also not a free operation. In other words, it's faster to call gl.uniformMatrix4fv once than the 4 times your example requires. If you were drawing 3000 objects, whether 12000 calls to gl.uniformMatrix4fv (4 matrices each) is significantly slower than 3000 calls (1 matrix each) is something you'd have to test.
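As a rough per-frame sketch (the loop structure, location names, and drawObject helper are illustrative, not from your code):

// Separate matrices: 4 uploads per object, 12000 calls for 3000 objects
for (const obj of objects) {
  gl.uniformMatrix4fv(uPMatrixLoc, false, pMatrix);
  gl.uniformMatrix4fv(uRotationMatrixLoc, false, obj.rotationMatrix);
  gl.uniformMatrix4fv(uScaleMatrixLoc, false, obj.scaleMatrix);
  gl.uniformMatrix4fv(uTranslationMatrixLoc, false, obj.translationMatrix);
  drawObject(obj);
}

// Pre-composed matrix: 1 upload per object, 3000 calls total
for (const obj of objects) {
  gl.uniformMatrix4fv(uMVPMatrixLoc, false, obj.mvpMatrix);  // composed in JavaScript
  drawObject(obj);
}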

Further, the browser teams are working on making matrix math in JavaScript faster and trying to get it closer to C/C++.

I guess that means there is no right answer except to test, and those results will be different for every platform/browser/GPU/driver/CPU.

answered Oct 10 '22 by gman