A lot of suggestions for improving performance in iPhone games revolve around sending less data to the GPU. The obvious suggestion is to use GLshort instead of GLfloat wherever possible, such as for vertices, normals, or texture coordinates.
What are the specifics when using a GLshort for a normal or a texture coordinate? Is it possible to represent a GLfloat texture coordinate of 0.5 when using a GLshort? If so, how do you do that? Would it just be SHRT_MAX/2? That is, does the range of 0 to 1 for a GLfloat map to 0 to SHRT_MAX when using a GLshort texture coordinate?
What about normals? I've always created normals with GLfloats and normalized them to unit length. When using a GLshort for a normal, are you sending a non-normalized vector to the GPU? If so, when and how is it normalized? By dividing all components by SHRT_MAX?
The OpenGL ES 1.1 specification says that normals are automatically brought back into the [-1, 1] or [0, 1] range when an integer type is used. (See Table 2.7 of the specification for the full list of formulae.)
For shorts: n_x = (2c + 1) / (2^16 − 1), where c is the incoming GLshort component.
So you don't need to rely on GL_NORMALIZE to rescale short normals (and you can use whatever quantization trick you want).
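As an illustration, here is a minimal sketch of quantizing unit-length float normals to GLshort on the CPU; pack_normal_component is a hypothetical helper that just inverts the mapping above.

#include <OpenGLES/ES1/gl.h>   /* OpenGL ES 1.1 types and functions */
#include <math.h>

/* Hypothetical helper: inverts n_x = (2c + 1) / (2^16 - 1), mapping a
   float in [-1.0, 1.0] to a GLshort in [-32768, 32767]. */
static GLshort pack_normal_component(GLfloat f)
{
    return (GLshort)roundf((f * 65535.0f - 1.0f) / 2.0f);
}

/* Submit the packed normals as-is; no GL_NORMALIZE required, provided
   the source vectors were already unit length: */
/* glNormalPointer(GL_SHORT, 0, packed_normals); */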
Texture coordinates, however, do not get rescaled (values outside the [0, 1] range are perfectly valid). If you want to apply such a scaling, your best bet is a texture coordinate matrix, at a somewhat significant cost:
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(matrix_that_does_conversion_based_on_type);
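For example, a minimal sketch, assuming the coordinates were quantized so that 0..SHRT_MAX maps to 0.0..1.0 (so a GLfloat of 0.5 would indeed be stored as roughly SHRT_MAX/2):

#include <OpenGLES/ES1/gl.h>
#include <limits.h>

static void use_short_texcoords(const GLshort *texcoords)
{
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    /* Scale s and t by 1/SHRT_MAX so 32767 reads back as 1.0. */
    glScalef(1.0f / SHRT_MAX, 1.0f / SHRT_MAX, 1.0f);
    glMatrixMode(GL_MODELVIEW);   /* restore the usual matrix mode */

    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_SHORT, 0, texcoords);
}

The extra per-vertex matrix multiply in the fixed-function pipeline is where the "somewhat significant cost" comes from.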