I want to display mesh models in OpenGL ES 2.0 in a way that clearly shows the actual mesh, so I don't want smooth shading across each primitive/triangle. The only two options I can think of are
Does it have to be like this? Why can't I simply read in the primitives without specifying any normals, and somehow let OpenGL ES 2.0 produce flat shading on each face?
There is a similar Stack Overflow question, but it offers no solution.
A normal vector (or normal, for short) is a vector that points in a direction that's perpendicular to a surface. For a flat surface, one perpendicular direction is the same for every point on the surface, but for a general curved surface, the normal direction might be different at each point on the surface.
Vertex normals (sometimes called pseudo-normals) are values stored at each vertex that are most commonly used by a renderer in lighting and shading models, such as Phong shading. For example, the normal of a surface is used to calculate in which direction light reflects off that surface.
Normals play a central role in shading: an object becomes brighter when it is oriented towards a light source. The orientation of a surface thus plays an important role in how much light it reflects (and hence how bright it looks).
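To make that concrete, here is a minimal sketch of the standard Lambertian (diffuse) term in C; the function name and vector layout are illustrative, not part of any particular API:

```c
#include <math.h>

/* Lambertian diffuse term: brightness is proportional to the cosine of
 * the angle between the surface normal n and the direction l toward the
 * light. Both vectors are assumed to be normalized. */
static float lambert(const float n[3], const float l[3])
{
    float d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    return d > 0.0f ? d : 0.0f;   /* surfaces facing away get no light */
}
```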
A face normal is a vector that defines which way a face is pointing. The direction that the normal points represents the front, or outer surface of the face. To compute the face normal of a face in a mesh you simply take the cross product of two edge vectors of the face.
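In code, that cross product looks like the following sketch (the helper name and the final normalization are my additions; normalizing is needed before the normal is used for lighting):

```c
#include <math.h>

/* Face normal of triangle (a, b, c): cross product of two edge vectors,
 * normalized. The winding order a -> b -> c determines which side is
 * the "front" of the face. */
static void face_normal(const float a[3], const float b[3],
                        const float c[3], float out[3])
{
    float e1[3] = { b[0]-a[0], b[1]-a[1], b[2]-a[2] };
    float e2[3] = { c[0]-a[0], c[1]-a[1], c[2]-a[2] };

    out[0] = e1[1]*e2[2] - e1[2]*e2[1];
    out[1] = e1[2]*e2[0] - e1[0]*e2[2];
    out[2] = e1[0]*e2[1] - e1[1]*e2[0];

    float len = sqrtf(out[0]*out[0] + out[1]*out[1] + out[2]*out[2]);
    if (len > 0.0f) {
        out[0] /= len; out[1] /= len; out[2] /= len;
    }
}
```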
Because in order to have shading on your mesh (any shading, smooth or flat), you need a lighting model, and OpenGL ES can't guess it for you. There is no fixed-function pipeline in GL ES 2, so you can't rely on any built-in function (with a built-in lighting model) to do the job for you.
In flat shading, the whole triangle is drawn with the same color, computed from the angle between its normal and the light direction (yes, you also need a light source, which could simply sit at the origin of the perspective view). This is why you need at least one normal per triangle.
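A minimal ES 2.0 shader pair for this could look like the sketch below, written here as C string literals; the attribute/uniform names are illustrative, and for simplicity the light direction is assumed to already be in the same space as the normals, so no normal matrix is applied. Because all three vertices of a triangle carry the same face normal, the diffuse term comes out identical at each corner, and interpolation leaves the whole triangle one flat color:

```c
/* Minimal GLSL ES 2.0 shaders for flat (per-face) diffuse shading.
 * The names (a_position, a_normal, u_mvp, u_lightDir) are illustrative. */
static const char *vertex_src =
    "attribute vec3 a_position;\n"
    "attribute vec3 a_normal;   /* replicated face normal */\n"
    "uniform mat4 u_mvp;\n"
    "uniform vec3 u_lightDir;   /* normalized, same space as normals */\n"
    "varying float v_diffuse;\n"
    "void main() {\n"
    "    /* same normal at all 3 vertices => same v_diffuse over the face */\n"
    "    v_diffuse = max(dot(normalize(a_normal), u_lightDir), 0.0);\n"
    "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
    "}\n";

static const char *fragment_src =
    "precision mediump float;\n"
    "varying float v_diffuse;\n"
    "void main() {\n"
    "    gl_FragColor = vec4(vec3(v_diffuse), 1.0);\n"
    "}\n";
```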
Then, a GPU works in a highly parallel way, processing many vertices (and then fragments) at the same time. To be efficient, it can't share data among vertices. This is why you need to replicate the normal at each vertex.
Also, your mesh can't share vertices among triangles anymore as you said, because triangles now share only the vertex position, not the vertex normal. So you need to put 3 * NbTriangles vertices in your buffer, each one holding one position and one normal, as in the sketch below. You also can't benefit from triangle strips/fans, because none of your faces will have a vertex in common with another one (because, again, of the different normals).
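Here is a sketch of that buffer expansion, assuming an indexed triangle list as input and reusing the hypothetical face_normal() helper from above:

```c
/* Expand an indexed triangle list into a flat-shaded vertex buffer:
 * 3 * num_tris vertices, each carrying [position(3), normal(3)]. */
static void build_flat_buffer(const float *positions,       /* x,y,z per vertex */
                              const unsigned short *indices, /* 3 per triangle */
                              int num_tris,
                              float *out /* 3 * num_tris * 6 floats */)
{
    for (int t = 0; t < num_tris; ++t) {
        const float *a = &positions[3 * indices[3*t + 0]];
        const float *b = &positions[3 * indices[3*t + 1]];
        const float *c = &positions[3 * indices[3*t + 2]];

        float n[3];
        face_normal(a, b, c, n);   /* one normal for the whole face */

        const float *corners[3] = { a, b, c };
        for (int v = 0; v < 3; ++v) {
            float *dst = &out[(3*t + v) * 6];
            dst[0] = corners[v][0]; dst[1] = corners[v][1]; dst[2] = corners[v][2];
            dst[3] = n[0]; dst[4] = n[1]; dst[5] = n[2];
        }
    }
}
```

The resulting buffer would then be drawn with glDrawArrays(GL_TRIANGLES, 0, 3 * num_tris), with the position and normal attributes bound via glVertexAttribPointer using a stride of 6 * sizeof(float).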