
Why do you use camera space instead of model space for normals?

Tags:

c++

math

opengl

I am learning OpenGL graphics, and am getting into shadows. The tutorials that I am reading are telling me to transform my normals and light vector to camera space. Why is this? Why can't you just keep the coords in model space?

A follow-up question to this is how to handle model transformations. I am unable to find a definitive answer. I currently have this code:

vec3 normCamSpace = normalize(mat3(V) * normal);                    // normal into camera space
vec3 dirToLight   = (V * vec4(lightPos, 0.0)).xyz;                  // light into camera space (w = 0: no translation)
float cosTheta    = clamp(dot(normCamSpace, dirToLight), 0.0, 1.0); // diffuse factor

V is the view matrix, or the camera matrix. I am unsure how to adjust the light when the model changes position, rotation, or scale.

asked Jul 06 '13 by user975989

2 Answers

The main reason is that your light positions will usually be given not in model space, but in world space. However, for illumination to work efficiently, all calculations must happen in a common space. In the usual transformation chain, model-local coordinates are transformed by the modelview matrix directly into view space:

p_view = MV · p_local

Since you normally have only one modelview matrix, it would be cumbersome to separate this step into something like

p_world = M · p_local
p_view  = V · p_world

For that, you would need M and V to be available separately instead of only their product MV.
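For illustration, here is a minimal vertex-shader sketch with M and V kept as separate uniforms (the uniform and varying names are my own, and the world-space light position lightPosWorld is assumed to be supplied by the application):

#version 330 core

layout(location = 0) in vec3 position;   // model-space vertex position
layout(location = 1) in vec3 normal;     // model-space vertex normal

uniform mat4 M;              // model matrix (model -> world)
uniform mat4 V;              // view matrix (world -> view)
uniform mat4 P;              // projection matrix
uniform vec3 lightPosWorld;  // light position given in world space

out vec3 normView;    // normal in view space
out vec3 dirToLight;  // vertex-to-light direction in view space

void main()
{
    mat4 MV = V * M;
    vec4 posView = MV * vec4(position, 1.0);

    // Normal matrix: inverse-transpose of the upper-left 3x3 of MV.
    // This stays correct under non-uniform scaling; for pure
    // rotation and translation, mat3(MV) alone would suffice.
    normView = normalize(transpose(inverse(mat3(MV))) * normal);

    // The light is a point, so it is transformed with w = 1 (the
    // translation part of V must apply), and by V only, never by M.
    vec3 lightPosView = (V * vec4(lightPosWorld, 1.0)).xyz;
    dirToLight = normalize(lightPosView - posView.xyz);

    gl_Position = P * posView;
}

Because the light position is transformed by V alone while the geometry goes through V·M, moving, rotating, or scaling the model only changes M, and the lighting stays consistent without extra bookkeeping. That also addresses the follow-up question in the original post.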

Since the projection transformation traditionally happens as a separate step, view space is the natural "common lower ground" on which to base illumination calculations. It just involves transforming your light positions from world to view space, and since light positions are not very complex, this is usually done on the CPU, with the pretransformed light positions passed to the shader as uniforms.

Note that nothing stops you from performing illumination calculations in world space, or in model-local space. It just takes transforming the light positions into that space correctly.
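As a rough sketch of the world-space variant (again with hypothetical names), the normal is taken to world space with the model's normal matrix and the world-space light position is used as-is:

#version 330 core

layout(location = 0) in vec3 position;  // model space
layout(location = 1) in vec3 normal;    // model space

uniform mat4 M;              // model matrix
uniform mat4 V;              // view matrix
uniform mat4 P;              // projection matrix
uniform vec3 lightPosWorld;  // light position already in world space

out vec3 normWorld;   // normal in world space
out vec3 dirToLight;  // vertex-to-light direction in world space

void main()
{
    vec4 posWorld = M * vec4(position, 1.0);
    normWorld  = normalize(transpose(inverse(mat3(M))) * normal);
    dirToLight = normalize(lightPosWorld - posWorld.xyz);
    gl_Position = P * V * posWorld;
}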

answered Sep 22 '22 by datenwolf


I am learning OpenGL graphics, and am getting into shadows. The tutorials that I am reading are telling me to transform my normals and light vector to camera space. Why is this? Why can't you just keep the coords in model space?

Actually, if you're the one writing the shader, you can use whatever coordinate space you want. IMO calculating lighting in world space feels more "natural", but that's a matter of taste.

However, there are two small details:

  1. You cannot "naturally" calculate lighting in object space if your object is a skinned mesh (a character model animated by bones). Such a model requires world space or view space. If your object can only be translated and rotated (rigid transforms only), then lighting can easily be calculated in model/object space. I think some game engines actually worked this way.
  2. If you use camera space, you can drop one subtraction when calculating specular highlights, as sketched below. Blinn-Phong specular models require the vector to (or from) the eye to compute the specular factor. In camera space, the vector from the eye to a point is equal to the point's position, since the eye sits at the origin. This is a very small optimization and it probably isn't worth the effort.
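To make point 2 concrete, here is a minimal fragment-shader sketch, assuming the vertex shader passes down the view-space position and normal and that the application pretransforms the light position to view space (all names hypothetical):

#version 330 core

in vec3 posView;            // interpolated view-space surface position
in vec3 normView;           // interpolated view-space normal
uniform vec3 lightPosView;  // light position pretransformed to view space

out vec4 fragColor;

void main()
{
    vec3 N = normalize(normView);
    vec3 L = normalize(lightPosView - posView);

    // In camera space the eye sits at the origin, so the direction
    // from the surface point to the eye is just the negated position;
    // the eye-position subtraction a world-space shader needs is gone.
    vec3 viewDir = normalize(-posView);

    // Blinn-Phong specular term with an arbitrary exponent.
    vec3 H = normalize(L + viewDir);
    float spec = pow(max(dot(N, H), 0.0), 32.0);

    fragColor = vec4(vec3(spec), 1.0);
}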
answered Sep 21 '22 by SigTerm