Having followed the instructions of the tutorial https://www.youtube.com/watch?v=yc0b5GcYl3U (How To Unwrap A UV Sphere In Blender), I succeeded in generating a textured sphere within Blender.
Now I want it in my OpenGL C++ program. To this end I followed the tutorial http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Load_OBJ and exported the sphere as an .obj file (using the triangulation export option, as stated in said tutorial), and joyfully found a lot of 'v', 'vt', and 'f' lines in the result.
However, parsing the file I found 642 vertices ('v'), 561 texture vertices ('vt'), and 1216 face lines ('f') with the expected structure 'f a/at b/bt c/ct'.
What baffles me is this: My naive understanding of OpenGL tells me that each point on a textured object has a site in space (the vertex) and a site on the texture (the UV point). Hence I would really expect the numbers of v's and vt's to match. But they do not: 642 != 561. How can that be?
Because OBJ and OpenGL use different definitions of "vertex", and handle indices differently.
In the following explanation, I'll call the coordinates of a vertex, i.e. the values in the 'v' records of the OBJ format, "positions".
The main characteristic of the OBJ vertex/index model is that it uses separate indices for different vertex attributes (positions, normals, texture coordinates).
This means that you can have independent lists of positions and texture coordinates, with different sizes. The file only needs to list each unique position once, and each unique texture coordinate pair once.
A vertex is then defined by specifying 3 indices: One each for the position, the texture coordinates, and the normal.
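To illustrate, here is a small hypothetical OBJ fragment (not from the asker's file) with v/vt records only, matching the 'f a/at b/bt c/ct' structure described in the question. Note that OBJ indices are 1-based:

```
v 0.0 1.0 0.0
v 1.0 0.0 0.0
v 0.0 0.0 1.0
vt 0.5 1.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.75 0.5
# Two triangles sharing positions. The second face reuses position 1
# but pairs it with texture coordinate 4 instead of 1, which is why
# the position and texture-coordinate counts need not match.
f 1/1 2/2 3/3
f 1/4 3/3 2/2
```

Here there are 3 positions and 4 texture coordinates; a seam on the sphere works the same way, with one position mapped to two different UV points.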
OpenGL on the other hand uses a single set of indices, which reference complete vertices.
A vertex is defined by its position, texture coordinates, and normal. So a vertex is needed for each unique combination of position, texture coordinates, and normal.
When you read an OBJ file for OpenGL rendering, you need to create a vertex for each unique combination of position, texture coordinates, and normal. Since these attributes are referenced by index triplets in the 'f' records, you create one OpenGL vertex for each unique index triplet found in those records. For each of these vertices, you copy the position, texture coordinates, and normal at the given indices, as read from the OBJ file.
My older answer here contains pseudo-code to illustrate this process: OpenGL - Index buffers difficulties.
A Wavefront OBJ file builds faces ('f') by supplying indices to texture coordinates ('vt'), vertices ('v'), and normals ('vn'). If multiple faces share data, they simply use the same index rather than duplicating the vt, v, or vn data.