I am currently learning OpenGL and GLSL in order to write a simple program that loads models, displays them on the screen, transforms them, and so on.
As a first stage, I wrote a pure C++ program without using OpenGL. It works great, and it uses a row-major matrix representation: mat[i][j] means row i and column j.
class mat4
{
    vec4 _m[4]; // vec4 is a struct with 4 fields
    ...
};
This is the relevant matrix multiplication method:
mat4 operator*(const mat4& m) const
{
    mat4 a(0.0);
    for (int i = 0; i < 4; ++i)
    {
        for (int j = 0; j < 4; ++j)
        {
            for (int k = 0; k < 4; ++k)
            {
                a[i][j] += _m[i][k] * m[k][j];
            }
        }
    }
    return a;
}
In order to get from model space to clip space I do as follows in C++:
vec4 vertexInClipSpace = projectionMat4 * viewMat4 * modelMat4 * vertexInModelSpace;
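For completeness, the matching mat4-times-vec4 overload under this row-major, column-vector convention would look something like the sketch below (assuming, as the mat4 code above implies, that vec4 has an operator[] and a zero-initializing constructor):

vec4 operator*(const vec4& v) const
{
    vec4 r(0.0);
    // Component i of the result is the dot product of row i with v.
    for (int i = 0; i < 4; ++i)
        for (int k = 0; k < 4; ++k)
            r[i] += _m[i][k] * v[k];
    return r;
}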
Now, trying to implement the same thing in a GLSL shader (version 1.50) yields weird results. It works, but only if I post-multiply the vertex instead of pre-multiplying it, and in addition transpose each of the matrices:
#version 150

uniform mat4 m;
uniform mat4 v;
uniform mat4 p;

in vec3 vertex; // model-space position

void main()
{
    // Works, but only with post-multiplication and transposed matrices:
    gl_Position = vec4(vertex, 1.0f) * m * v * p;
}
Although this is mathematically sound, since v2 = P * V * M * v1 is equivalent to transpose(v2) = transpose(v1) * transpose(M) * transpose(V) * transpose(P) (applying the identity transpose(A * B) = transpose(B) * transpose(A) to each product in turn), I am obviously missing something, because I have not seen a single reference where the vertex is post-multiplied in the vertex shader.
To sum up, here are my specific questions:

1. Why does this work? Is it even legal to post-multiply in GLSL?
2. How can I pass my C++ matrices so that they work properly inside the shader?
The problem was sort of "solved" by setting the transpose flag to GL_TRUE in the call to:
glUniformMatrix4fv(
    m_modelTransformID,
    1,
    GL_TRUE,
    &m[0][0]
);
Now the multiplication in the shader is a pre-multiplication:
gl_Position = MVP * vec4(vertex, 1.0f);
This left me somewhat puzzled, since the mathematics doesn't seem to work out for column-major matrices that are the transpose of row-major ones. Could someone please explain?
Citing the OpenGL FAQ:
For programming purposes, OpenGL matrices are 16-value arrays with base vectors laid out contiguously in memory. The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification.
Column-major versus row-major is purely a notational convention. Note that post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices. The OpenGL Specification and the OpenGL Reference Manual both use column-major notation. You can use any notation, as long as it's clearly stated.
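To make the quoted layout concrete, here is a small illustration (the variable names are just for this example): a translation by (tx, ty, tz) stored as the 16 contiguous values OpenGL expects. Read column-major, each group of four is a base vector, and the translation occupies the 13th, 14th, and 15th values (indices 12 to 14 in 0-based C++):

const float tx = 1.0f, ty = 2.0f, tz = 3.0f;
const float translation[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,   // column 0: x base vector
    0.0f, 1.0f, 0.0f, 0.0f,   // column 1: y base vector
    0.0f, 0.0f, 1.0f, 0.0f,   // column 2: z base vector
    tx,   ty,   tz,   1.0f    // column 3: translation
};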
First, some conventions:
Row vs Column Vector
Multiplying two matrices is possible only if the number of columns of the left matrix equals the number of rows of the right matrix:
MatL[r1,c] x MatR[c,r2]
So, working on a piece of paper, and considering that a vector is a one-dimensional matrix, if you want to multiply a 4-vector by a 4x4 matrix, then the vector must be:

- a 1x4 row vector if you post-multiply the vertex: vec * Mat
- a 4x1 column vector if you pre-multiply it: Mat * vec
In a computer, four consecutive values can be treated either as a row or as a column (there is no inherent concept of dimension), so you can post-multiply or pre-multiply a vector by the same matrix. Implicitly, you are sticking with one of the two conventions.
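Both options respect the dimension rule above, as this little sketch shows (the helper names are hypothetical):

// v * M: the 4 values act as a 1x4 row vector; [1x4] * [4x4] = [1x4]
void postMultiplyVector(const float v[4], const float M[4][4], float out[4])
{
    for (int j = 0; j < 4; ++j)
    {
        out[j] = 0.0f;
        for (int k = 0; k < 4; ++k)
            out[j] += v[k] * M[k][j];
    }
}

// M * v: the same 4 values act as a 4x1 column vector; [4x4] * [4x1] = [4x1]
void preMultiplyVector(const float M[4][4], const float v[4], float out[4])
{
    for (int i = 0; i < 4; ++i)
    {
        out[i] = 0.0f;
        for (int k = 0; k < 4; ++k)
            out[i] += M[i][k] * v[k];
    }
}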
Row Major vs Column Major layout
Computer memory is a contiguous sequence of locations. The concept of multiple dimensions doesn't exist there; it is purely a convention. All matrix elements are stored contiguously in one-dimensional memory.
If you decide to store a two-dimensional entity, you have two conventions:

- row-major: the elements of each row are stored contiguously
- column-major: the elements of each column are stored contiguously
Incidentally, transposing the elements of a matrix stored in row-major order is equivalent to storing its elements in column-major order. This implies that swapping the order of a vector-matrix multiplication is equivalent to multiplying the same vector, in the same order, by the transposed matrix.
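In code, the equivalence is just a swap of the indexing convention: reading the same 16 floats with the indices exchanged yields the transposed matrix (a sketch, with hypothetical helper names):

// Row-major reading: element (i, j) lives at offset i * 4 + j.
float readRowMajor(const float a[16], int i, int j)    { return a[i * 4 + j]; }

// Column-major reading: element (i, j) lives at offset j * 4 + i,
// which is exactly the row-major reading of the transposed matrix.
float readColumnMajor(const float a[16], int i, int j) { return a[j * 4 + i]; }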
OpenGL
It doesn't officially prescribe any convention, as stated above. I suggest you look at the OpenGL convention as if the translation were stored in the last column and the matrix layout were column-major.
Why does this work? Is it even legal to post-multiply in GLSL?
It is legal. As long as you are consistent across your code, either convention/multiplication order is fine.
How can I pass my C++ matrices so that they work properly inside the shader?
If you are using two different conventions in C++ and in the shader, then you can either transpose the matrix and keep the same multiplication order, or leave the matrix as-is and reverse the multiplication order.
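Concretely, reusing the identifiers from the question, the two equivalent options look like this (a sketch, assuming m holds the row-major matrix from the C++ side):

// Option 1: let OpenGL transpose the row-major data on upload and
// keep pre-multiplication in the shader: gl_Position = MVP * vec4(vertex, 1.0);
glUniformMatrix4fv(m_modelTransformID, 1, GL_TRUE, &m[0][0]);

// Option 2: upload the bytes untouched and post-multiply in the
// shader instead: gl_Position = vec4(vertex, 1.0) * MVP;
glUniformMatrix4fv(m_modelTransformID, 1, GL_FALSE, &m[0][0]);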