Inspired by Crytek's presentation on storing tangent space as quaternions for smaller vertices, I came to the logical conclusion that if you can use quaternions to store tangent space, then you could also interpolate quaternions between vertices and use them to rotate normals directly. This would eliminate the need to re-orthogonalize the tangent-space vectors or reconstruct one of them, and it would replace the per-fragment matrix-vector multiplication with a single quaternion-vector multiplication.
I tried to implement this in my OpenGL app, using my home-made quaternion class, and I'm having some issues. I know that I can construct my quaternion from a matrix, multiply the quaternion by a vector, and get the same result as multiplying the matrix by the vector - I've verified this on the CPU side. However, once I start working with the quaternions in GLSL, everything tends to go haywire.
Interestingly, I can in fact discern the pattern of the normal map, so I think I'm on the right track; unfortunately, the resulting colors are wrong.
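For reference, the CPU-side check I mentioned is roughly the following. This is only a sketch: quat::fromMat3 is the function shown further down, while the vec3 constructor, rotate() and the epsilon compare are assumed helpers from my own math classes.

#include <cassert>
#include <cmath>

/* sketch: the quaternion built from an orthonormal basis should rotate a
   vector the same way the matrix built from that basis does */
void checkQuatMatchesMatrix(const vec3& t, const vec3& b, const vec3& n, const vec3& v)
{
    quat q = quat::fromMat3(t, b, n);                 /* quaternion from the TBN columns */
    vec3 byMatrix(t.x * v.x + b.x * v.y + n.x * v.z,  /* column-major mat3 * v */
                  t.y * v.x + b.y * v.y + n.y * v.z,
                  t.z * v.x + b.z * v.y + n.z * v.z);
    vec3 byQuat = q.rotate(v);                        /* assumed helper: rotate v by q */
    assert(std::fabs(byMatrix.x - byQuat.x) < 1e-4f &&
           std::fabs(byMatrix.y - byQuat.y) < 1e-4f &&
           std::fabs(byMatrix.z - byQuat.z) < 1e-4f);
}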
This is the quaternion math I use in GLSL:
vec4 multQuat(vec4 q1, vec4 q2)
{
    return vec4(
        (q1.w * q2.y) + (q1.y * q2.w) + (q1.x * q2.z) - (q1.z * q2.x),
        (q1.w * q2.z) + (q1.z * q2.w) + (q1.y * q2.x) - (q1.x * q2.y),
        (q1.w * q2.w) - (q1.x * q2.x) - (q1.y * q2.y) - (q1.z * q2.z),
        (q1.w * q2.x) + (q1.x * q2.w) + (q1.z * q2.y) - (q1.y * q2.z)
    );
}

vec3 rotateVector(vec4 quat, vec3 vec)
{
    return vec + 2.0 * cross(quat.xyz, cross(quat.xyz, vec) + (quat.w * vec));
}
This is how it's passed from the vertex shader:
vQtangent = multQuat(inQtangent, quatView);
Where quatView is a quaternion made from the view matrix. This might be my issue, because the code that generates this quaternion assumes that the matrix is orthogonal.
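For what it's worth, building quatView looks roughly like this on the CPU. This is a sketch only: viewMat, the free normalize() for vec3, and fromMat3 being a static member of my quat class are assumptions, and the m[column][row] indexing matches the fromMat4 shown further down. Normalizing the columns strips uniform scale, but it still assumes the matrix has no shear.

/* sketch: build quatView from the rotation part of the view matrix */
vec3 c0(viewMat.m[0][0], viewMat.m[0][1], viewMat.m[0][2]);
vec3 c1(viewMat.m[1][0], viewMat.m[1][1], viewMat.m[1][2]);
vec3 c2(viewMat.m[2][0], viewMat.m[2][1], viewMat.m[2][2]);
quat quatView = quat::fromMat3(normalize(c0), normalize(c1), normalize(c2));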
Finally, we calculate the bumped normal in the fragment shader:
vec3 calcBumpedNormal(void)
{
    vec4 qtangent = normalize(vQtangent);
    vec3 normal = texture2D(texNormal, vTexCoord).xyz;
    normal = normal * 2.0 - 1.0;
    return normalize(rotateVector(qtangent, normal));
}
Here's how I calculate a quaternion from three vec3s (how I get the quaternion from the TBN vectors):
inline static quat fromMat3(const vec3& col0, const vec3& col1, const vec3& col2)
{
    /* warning - this only works when the matrix is orthogonal and special orthogonal */
    float w = sqrtf(1.0f + col0.x + col1.y + col2.z) / 2.0f;
    return quat(
        (col1.z - col2.y) / (4.0f * w),
        (col2.x - col0.z) / (4.0f * w),
        (col0.y - col1.x) / (4.0f * w),
        w);
}
And here's how I calculate the quaternion from a mat4 (how I get quatView from the view matrix):
inline static quat fromMat4(const mat4& mat)
{
    /* warning - this only works when the matrix is orthogonal and special orthogonal */
    float w = sqrtf(1.0f + mat.m[0][0] + mat.m[1][1] + mat.m[2][2]) / 2.0f;
    return quat(
        (mat.m[1][2] - mat.m[2][1]) / (4.0f * w),
        (mat.m[2][0] - mat.m[0][2]) / (4.0f * w),
        (mat.m[0][1] - mat.m[1][0]) / (4.0f * w),
        w);
}
I am aware that neither works with non-orthogonal matrices.
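On a related note, both conversions also break down numerically when the rotation gets close to 180 degrees, because the trace approaches -1 and the division by 4w blows up. If that ever matters for my tangent frames, a branch-on-largest-diagonal variant would avoid it; this is only a sketch (not code I currently use), written against the same column layout as fromMat3 above:

/* sketch - branches on the largest diagonal element so no division by a
   near-zero 4w/4x/4y/4z can occur; still assumes a special orthogonal matrix */
inline static quat fromMat3Safe(const vec3& col0, const vec3& col1, const vec3& col2)
{
    float trace = col0.x + col1.y + col2.z;
    if (trace > 0.0f) {
        float s = sqrtf(trace + 1.0f) * 2.0f;                    /* s = 4w */
        return quat((col1.z - col2.y) / s,
                    (col2.x - col0.z) / s,
                    (col0.y - col1.x) / s,
                    0.25f * s);
    } else if (col0.x > col1.y && col0.x > col2.z) {
        float s = sqrtf(1.0f + col0.x - col1.y - col2.z) * 2.0f; /* s = 4x */
        return quat(0.25f * s,
                    (col1.x + col0.y) / s,
                    (col2.x + col0.z) / s,
                    (col1.z - col2.y) / s);
    } else if (col1.y > col2.z) {
        float s = sqrtf(1.0f + col1.y - col0.x - col2.z) * 2.0f; /* s = 4y */
        return quat((col1.x + col0.y) / s,
                    0.25f * s,
                    (col2.y + col1.z) / s,
                    (col2.x - col0.z) / s);
    } else {
        float s = sqrtf(1.0f + col2.z - col0.x - col1.y) * 2.0f; /* s = 4z */
        return quat((col2.x + col0.z) / s,
                    (col2.y + col1.z) / s,
                    0.25f * s,
                    (col0.y - col1.x) / s);
    }
}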
Also, only the x and y of the normal are stored in the normal buffer; I reconstruct z in the light-pass fragment shader using the sqrt trick (z = sqrt(1 - x*x - y*y)). Because these normals are meant to be in view space, the z component is always positive.
Unfortunately, my results are incorrect, and I don't know where to look. I can discern the pattern of the normal map, so something has to be right.
If anybody can point me to where my problem might be, or has experience doing this themselves, any advice is greatly appreciated.
Does your code work fine if you use the per-vertex quaternion only in the vertex shader (by transforming the light & camera vectors into the tangential space)? If it breaks only when you try to rotate the normal in the pixel shader, then your problem is quaternion interpolation (if not, then I've just wasted 20mins).
Quaternions are not in a 1:1 relation with the orthonormal matrices of a selected handedness (I assume your handedness is fine, but you should verify that). If you multiply each of the quaternion's components by -1, you'll get the same transformation.
Now, your fromMat3 always produces a quaternion with a positive W component. Imagine how interpolation goes along an edge between (0.99, 0, 0, 0.1) and (-0.99, 0, 0, 0.1). The X component will travel all the way through its axis; halfway along the edge the interpolated quaternion is (0, 0, 0, 0.1), which normalizes to the identity rotation - nothing like either endpoint - causing all sorts of shading issues for you.
You've got to make sure that any quaternion interpolation (QI) happens between quaternions belonging to the same hemisphere, i.e. dot(q1, q2) > 0. It's easy to see how this check fails for the example quaternions I mentioned, and how it works fine if you multiply the second quaternion by -1.
The tricky part is that ensuring QI-correctness may require splitting edges and adding new vertices, so it is best done on the exporter side, not during model loading. Have a look at the KRI mesh exporter code for reference.
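To make that concrete, a naive version of such processing could look like the sketch below. This is not the KRI code, and the Vertex/index layout is invented for illustration; it simply re-points a triangle corner at a flipped duplicate of a vertex whenever that corner's qtangent falls in the opposite hemisphere from the triangle's first corner.

#include <unordered_map>
#include <vector>

struct Vertex { float qt[4]; /* position, uv, ... */ };

static float dotQ(const Vertex& a, const Vertex& b)
{
    return a.qt[0]*b.qt[0] + a.qt[1]*b.qt[1] + a.qt[2]*b.qt[2] + a.qt[3]*b.qt[3];
}

void makeQtangentsConsistent(std::vector<Vertex>& verts, std::vector<unsigned>& indices)
{
    std::unordered_map<unsigned, unsigned> flipped; /* original index -> flipped duplicate */
    for (size_t t = 0; t + 2 < indices.size(); t += 3) {
        const Vertex ref = verts[indices[t]];       /* hemisphere reference for this triangle */
        for (int k = 1; k < 3; ++k) {
            unsigned i = indices[t + k];
            if (dotQ(ref, verts[i]) >= 0.0f)
                continue;                           /* already in the same hemisphere */
            auto it = flipped.find(i);
            if (it == flipped.end()) {
                Vertex copy = verts[i];
                for (int c = 0; c < 4; ++c) copy.qt[c] = -copy.qt[c];
                verts.push_back(copy);
                it = flipped.emplace(i, unsigned(verts.size() - 1)).first;
            }
            indices[t + k] = it->second;            /* retarget this corner to the flipped copy */
        }
    }
}

A real exporter would try to flip vertices in place first and only duplicate when two triangles genuinely need opposite signs, which is where the edge splitting mentioned above comes from.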
I don't recommend going there for practical reasons, unless you are very persistent. Instead, you can just happily use the quaternions in the vertex shader. If you ever get hold of the GPU Pro 3 book, you can find my article on quaternions there, explaining this very problem (and the solution) in detail.