I just started testing with the iPhone 5S and the 64-bit architecture on an OpenGL ES app. The problem I'm seeing is that CGFloat values come through way wrong by the time they reach the shaders: I pass in 0.8 and it shows up as -1.58819e-23 when I debug the shader. I am using glUniform4fv() to pass in the values. Do I need to use a different data type, or a different method to pass in the values? The values go through fine when I test on 32-bit.
CGFloat brushColor[4];
brushColor[0] = 0.8;
brushColor[1] = 0.1;
brushColor[2] = 0.1;
brushColor[3] = 0.3;
glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
(some of you may notice this is from the GLPaint demo...)
thanks,
austin
CGFloat is a typedef whose width depends on the build architecture: on a 32-bit build it is single-precision, and on a 64-bit build it is double-precision. Normally this would not be a huge issue, but you are using glUniform4fv, which takes a GLfloat *.
OpenGL stipulates that GLfloat is always a single-precision floating-point value, and the compiler can handle the demotion from double to single precision when you use the non-pointer version of this function (glUniform4f). When you use the pointer version, that conversion never happens: OpenGL expects to be handed an array of single-precision floats, but you are passing it an array of double-precision floats whose bytes get reinterpreted with no type conversion.
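To make that concrete, here is a minimal sketch of the two call styles (colorLocation is just an illustrative name standing in for the uniform handle in your code):

CGFloat brushColor[4] = { 0.8, 0.1, 0.1, 0.3 };

/* Scalar variant: each CGFloat argument is implicitly converted to
   GLfloat at the call site, so this works on both 32-bit and 64-bit. */
glUniform4f(colorLocation, brushColor[0], brushColor[1], brushColor[2], brushColor[3]);

/* Pointer variant: no conversion takes place. On 64-bit the bytes of
   the double array are read as floats, producing garbage values. */
glUniform4fv(colorLocation, 1, (const GLfloat *)brushColor);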
What you need to do is stop using CGFloat. Instead, use GLfloat. The OpenGL typedefs are provided precisely to ensure this sort of mismatch never happens.
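Applied to the snippet in your question, the fix is a one-line type change:

GLfloat brushColor[4];   /* GLfloat is single-precision on every architecture */
brushColor[0] = 0.8f;
brushColor[1] = 0.1f;
brushColor[2] = 0.1f;
brushColor[3] = 0.3f;
glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);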