
OpenGL floating point precision

I am working on a project to migrate from legacy/immediate OpenGL to modern OpenGL (core profile). In the legacy OpenGL implementation, we used double precision, i.e. we made use of GL_DOUBLE in glVertexPointer and we used glLoadMatrixd, etc. However, in modern OpenGL, only OpenGL/GLSL 4.0+ supports double precision.

How did old OpenGL support double precision when newer OpenGL doesn't? How did legacy OpenGL handle double precision when most graphics cards don't even support it?

I couldn't find much information on the internet, and the answers I did find were too old.

Do current graphics cards support double precision? Should I use OpenGL 4.0?

asked Jan 26 '23 by Tao Chen

1 Answer

How did old OpenGL support double precision when newer OpenGL doesn't? How did legacy OpenGL handle double precision when most graphics cards don't even support it?

It doesn't do what you think it does. Being able to pass doubles as vertex attributes does not imply (and never did) that the GL implementation performs its calculations at that precision; GPUs never did that. The data is converted to single precision in any case. The same goes for the matrices: the matrices sent to the GPU were never double precision in legacy GL, no matter whether you used the d or the f suffix when specifying the data CPU-side.

Only with more recent GPUs did double precision become available in the shader cores, and modern GL 4.x lets you actually use it for the first time.
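To actually get FP64 into the shader on GL 4.x, the attribute has to be declared with a double type in GLSL 4.00+ and fed via glVertexAttribLPointer (note the L, core since GL 4.1 / ARB_vertex_attrib_64bit); calling plain glVertexAttribPointer with GL_DOUBLE still converts the data to float. A minimal sketch (the attribute location and uniform name are placeholders):

```glsl
#version 400 core

// True double-precision input. On the C side this must be set up with
// glVertexAttribLPointer(0, 3, GL_DOUBLE, stride, offset); the plain
// glVertexAttribPointer would silently narrow the data to float.
layout(location = 0) in dvec3 position;

// Double-precision matrix, uploaded with glUniformMatrix4dv.
uniform dmat4 mvp;

void main()
{
    // The transform runs in FP64, then narrows to the required vec4,
    // since gl_Position itself is single precision.
    gl_Position = vec4(mvp * dvec4(position, 1.0lf));
}
```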

Also note that lots of real-world consumer GPUs have an insanely low double-precision throughput, often around 1/32 of their FP32 performance, so you had better think twice before using it. For graphics, you will seldom need it. And the fact that your previous code worked with legacy GL proves that you did not need the extra precision.

answered Jan 31 '23 by derhass