I have a shader that ideally needs 28 bits of mantissa, though I can get by with less at the cost of quality. How can I determine the precision of 'highp' in OpenGL ES? It's probably FP24, with a 16-bit mantissa, but I cannot find out for sure, or figure out how to ask OpenGL. Any ideas?
From the OpenGL ES Shading Language reference, the minimum requirements are:

highp - 16-bit, floating point range: -2^62 to 2^62, integer range: -2^16 to 2^16
mediump - 10-bit, floating point range: -2^14 to 2^14, integer range: -2^10 to 2^10
lowp - 8-bit, floating point range: -2 to 2, integer range: -2^8 to 2^8

In my testing on my line of expensive toys, the reported values comfortably exceed these minimums, so I will not list all combinations exhaustively.
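For reference, here is a minimal fragment shader sketch showing where these qualifiers appear in GLSL ES source; the uniform and varying names are illustrative only, not from any real project:

static const char *fragmentSrc =
    "precision mediump float;\n"      // fragment shaders have no default float precision
    "uniform highp vec2 u_offset;\n"  // highp: needs the wide floating point range
    "varying lowp vec4 v_color;\n"    // lowp: color data fits its limited range
    "void main() {\n"
    "    gl_FragColor = v_color + vec4(u_offset, 0.0, 0.0);\n"
    "}\n";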
Also note that the precision of ints is defined to always be 0; integers are represented exactly within their range.
(It is not clear whether lowp is actually required to be able to represent a value reaching exactly 2 or -2. I don't really know of a great way to test this, nor should we over-concern ourselves with these limitations; just use mediump whenever this could ever be an issue.)

Here is an example of how you might query the values:
GLint range[2], precision;  // note: range and precision are reported as log base 2 values
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT, range, &precision);
NSLog(@"Fragment shader high precision float range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_MEDIUM_FLOAT, range, &precision);
NSLog(@"Fragment shader medium precision float range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_LOW_FLOAT, range, &precision);
NSLog(@"Fragment shader low precision float range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_INT, range, &precision);
NSLog(@"Fragment shader high precision int range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_MEDIUM_INT, range, &precision);
NSLog(@"Fragment shader medium precision int range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_LOW_INT, range, &precision);
NSLog(@"Fragment shader low precision int range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_VERTEX_SHADER, GL_HIGH_FLOAT, range, &precision);
NSLog(@"Vertex shader high precision float range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_VERTEX_SHADER, GL_MEDIUM_FLOAT, range, &precision);
NSLog(@"Vertex shader medium precision float range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_VERTEX_SHADER, GL_LOW_FLOAT, range, &precision);
NSLog(@"Vertex shader low precision float range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_VERTEX_SHADER, GL_HIGH_INT, range, &precision);
NSLog(@"Vertex shader high precision int range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_VERTEX_SHADER, GL_MEDIUM_INT, range, &precision);
NSLog(@"Vertex shader medium precision int range: %d %d precision: %d", range[0], range[1], precision);
glGetShaderPrecisionFormat(GL_VERTEX_SHADER, GL_LOW_INT, range, &precision);
NSLog(@"Vertex shader low precision int range: %d %d precision: %d", range[0], range[1], precision);
It is not clear to me yet whether you can expect tangible performance improvements from choosing a lower-precision type (even on some phones that are now three years old). What is clear is that the trend is toward convergence with desktop hardware: recent GPUs have completely eliminated the 8-bit types and are recycling the mediump formats for lowp.
You want glGetShaderPrecisionFormat to query the range and precision of shader types:
GLint range[2], precision;
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT, range, &precision);
will give you the range and precision of highp float. The values are returned as log base 2: on hardware that implements highp as IEEE 754 single precision, expect range {127, 127} and precision 23 (the number of mantissa bits).
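If you would rather not spell out each query by hand, here is a small sketch that loops over every stage/format combination; the helper name LogShaderPrecision is mine, not part of any API, and it assumes a current OpenGL ES 2.0+ context:

static void LogShaderPrecision(void) {
    const GLenum stages[]      = { GL_VERTEX_SHADER, GL_FRAGMENT_SHADER };
    const char  *stageNames[]  = { "Vertex", "Fragment" };
    const GLenum formats[]     = { GL_LOW_FLOAT, GL_MEDIUM_FLOAT, GL_HIGH_FLOAT,
                                   GL_LOW_INT,   GL_MEDIUM_INT,   GL_HIGH_INT };
    const char  *formatNames[] = { "low float", "medium float", "high float",
                                   "low int",   "medium int",   "high int" };
    for (int s = 0; s < 2; s++) {
        for (int f = 0; f < 6; f++) {
            GLint range[2], precision;
            glGetShaderPrecisionFormat(stages[s], formats[f], range, &precision);
            // range and precision are log base 2 values
            NSLog(@"%s shader %s: range [-2^%d, 2^%d], precision 2^-%d",
                  stageNames[s], formatNames[f], range[0], range[1], precision);
        }
    }
}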