I'm learning to use normal maps (per-pixel lighting?) in 2D graphics with OpenGL.
Being new to normal mapping, I managed to wrap my head around the Sobel operator and the generation of normal maps (mostly thanks to this), that is, creating a (2D) array of normals from a (2D) array of pixel data.
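For reference, here's a minimal sketch of that generation step in C++, assuming a grayscale heightmap as input; `buildNormalMap` and the `strength` parameter are names I made up, not from any particular tutorial:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical helper: 'heights' holds one brightness value in [0,1] per
// pixel of the source image; the result is packed RGB8 normal-map data.
std::vector<uint8_t> buildNormalMap(const std::vector<float>& heights,
                                    int width, int height, float strength)
{
    auto h = [&](int x, int y) {
        // Clamp to the edge so the kernel also works on border pixels.
        x = std::max(0, std::min(width - 1, x));
        y = std::max(0, std::min(height - 1, y));
        return heights[y * width + x];
    };

    std::vector<uint8_t> rgb(static_cast<size_t>(width) * height * 3);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Sobel kernels estimate the height gradient in x and y.
            float dx = (h(x + 1, y - 1) + 2 * h(x + 1, y) + h(x + 1, y + 1))
                     - (h(x - 1, y - 1) + 2 * h(x - 1, y) + h(x - 1, y + 1));
            float dy = (h(x - 1, y + 1) + 2 * h(x, y + 1) + h(x + 1, y + 1))
                     - (h(x - 1, y - 1) + 2 * h(x, y - 1) + h(x + 1, y - 1));

            // The normal opposes the gradient; 'strength' scales the bumps.
            float nx = -dx * strength, ny = -dy * strength, nz = 1.0f;
            float len = std::sqrt(nx * nx + ny * ny + nz * nz);

            // Pack [-1,1] components into [0,255] bytes (the usual encoding).
            size_t i = (static_cast<size_t>(y) * width + x) * 3;
            rgb[i + 0] = static_cast<uint8_t>((nx / len * 0.5f + 0.5f) * 255.0f);
            rgb[i + 1] = static_cast<uint8_t>((ny / len * 0.5f + 0.5f) * 255.0f);
            rgb[i + 2] = static_cast<uint8_t>((nz / len * 0.5f + 0.5f) * 255.0f);
        }
    }
    return rgb;
}
```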
(Most of the tutorials and forum threads that I have found were specific to 3D uses and modelling software. I aim to implement this functionality myself, in C++.)
With OpenGL's fixed-function pipeline, you specify a normal per vertex. Vertices of the same polygon can share the same normal (for a flat surface) or have different normals (for a curved surface), but you can't assign normals anywhere other than at the vertices.
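For illustration, a legacy immediate-mode snippet (this goes inside a draw function, with a current GL context):

```cpp
#include <GL/gl.h>

// One normal shared by all four vertices -> a flat, uniformly lit quad.
glBegin(GL_QUADS);
glNormal3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f,  1.0f, 0.0f);
glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();
```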
Recognizing exported normal maps: you can tell OpenGL and DirectX normal maps apart just by looking at them. On an OpenGL normal map the surface appears lit from the top right, while on a DirectX one the light appears to come from the bottom right; the two conventions differ only in the direction of the green (Y) channel.
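So if you end up with a map in the wrong convention, converting is just a matter of inverting the green channel. A minimal sketch, assuming tightly packed RGB8 pixel data (`flipGreenChannel` is a hypothetical helper name):

```cpp
#include <cstdint>
#include <vector>

// Invert the green (Y) channel to convert between the two conventions.
void flipGreenChannel(std::vector<uint8_t>& pixels)
{
    // Pixels are packed R,G,B,R,G,B,... so green sits at every index 1 mod 3.
    for (size_t i = 1; i < pixels.size(); i += 3)
        pixels[i] = 255 - pixels[i];
}
```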
I recommend you look at:
This NVIDIA presentation on bump mapping
I haven't looked at it for a while, but I remember it covering most of the details of implementing a bump-map shader; it should get a few ideas running.
This other NVIDIA tutorial on implementing bump mapping in the Cg shader language
This bump mapping tutorial might also be helpful.
I know none of these cover full normal mapping, but they're a good start.
Also, while there are differences between shader languages, it shouldn't be too hard to convert the formulas between them if you want to use GLSL.
As ybungalobill said, you can do it without shaders, but unless you are working on an educational project (for your own education) or on a particular embedded device, I have no idea why the hell you would want to. If you do need to, this is where you want to look: it was written before shaders and later updated to reference them.
- What do I do once I've got the normal map?
- Do I need to register it with OpenGL?
Yes, you need to load it as a texture.
- Does it need to be associated with the texture, if yes, how is it done?
If you mean associated with the color texture, then no. You need to create a separate texture that holds the normal map in order to use it later with OpenGL.
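Something along these lines should work; it's an ordinary `glTexImage2D` upload, assuming `width`, `height` and the RGB8 pixel array from the generation step are already at hand:

```cpp
#include <GL/gl.h>

// In your init code: upload the normal map exactly like a color texture.
GLuint normalTex = 0;
glGenTextures(1, &normalTex);
glBindTexture(GL_TEXTURE_2D, normalTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Plain GL_RGB8: normals are data, not color, so never use an sRGB format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, normalPixels.data());
```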
- How is it mapped to a 2D textured quad?
Your normal map is just another texture: you bind it and map it like any other texture.
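For example, with the color texture on unit 0 and the normal map on unit 1 (`glActiveTexture` needs OpenGL 1.3+ or the ARB multitexture extension, and `glUniform1i` needs a shader; `program`, `colorTex` and `normalTex` are assumed to exist already):

```cpp
// Per frame, before drawing the quad: bind both textures and tell the
// shader which texture unit each sampler reads from.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalTex);
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "u_diffuse"), 0);
glUniform1i(glGetUniformLocation(program, "u_normalMap"), 1);
```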
A normal map stores the normals in tangent-space coordinates, so to calculate the lighting per pixel you need to know the position of the light source in the tangent-space coordinate system. This is done by supplying additional per-vertex attributes (normal, tangent, binormal), computing the light-source position in tangent-space coordinates, and interpolating that position across the triangles. In the fragment shader you look up the normal in the normal map and perform the desired lighting calculation using the interpolated parameters.
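For your 2D case there's a nice shortcut: a screen-facing quad's tangent space coincides with view space (tangent = +X, bitangent = +Y, normal = +Z), so you can skip the per-vertex tangent/binormal setup and pass the light position directly. A rough sketch of the fragment-shader side, embedded as a C++ string; the uniform and varying names are my own:

```cpp
// GLSL 1.20 fragment shader for per-pixel lighting of a 2D quad.
// v_texCoord and v_fragPos are assumed to come from the vertex shader.
const char* kFragmentShader = R"(
#version 120
uniform sampler2D u_diffuse;
uniform sampler2D u_normalMap;
uniform vec3 u_lightPos;        // light position, same space as v_fragPos
varying vec2 v_texCoord;
varying vec3 v_fragPos;

void main()
{
    // Unpack the stored normal from [0,1] back to [-1,1].
    vec3 n = normalize(texture2D(u_normalMap, v_texCoord).rgb * 2.0 - 1.0);
    vec3 l = normalize(u_lightPos - v_fragPos);

    // Simple Lambertian diffuse term; add ambient/specular as needed.
    float diffuse = max(dot(n, l), 0.0);
    vec4 base = texture2D(u_diffuse, v_texCoord);
    gl_FragColor = vec4(base.rgb * diffuse, base.a);
}
)";
```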
- (Is this something that I can do without shaders / GLSL?)
Yes, you can use some legacy extensions to program the multi-texture environment combination functions. I've never done it myself, but it looks like hell.
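For completeness, here's roughly what that setup looks like using the DOT3 texture combiner (GL_ARB_texture_env_combine / GL_ARB_texture_env_dot3, core since OpenGL 1.3); I haven't battle-tested this, and `normalTex` is assumed from the upload step:

```cpp
#include <GL/gl.h>

// Fixed-function DOT3 bump mapping: the combiner computes dot(N, L) per
// pixel, with N from the normal map and L smuggled in via the vertex color.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, normalTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);        // normal map
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);  // light dir
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);

// Example normalized light direction (0.5^2 + 0.5^2 + 0.7071^2 = 1),
// packed from [-1,1] into [0,1] the same way the normal map is.
struct { float x, y, z; } L = { 0.5f, 0.5f, 0.70710678f };
glColor3f(L.x * 0.5f + 0.5f, L.y * 0.5f + 0.5f, L.z * 0.5f + 0.5f);
```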