 

Why does OpenGL use float rather than double? [closed]

Tags:

opengl

I am wondering why OpenGL makes use of float rather than double in its functions. Double should be much more accurate than float.

asked Apr 04 '12 by user705414

People also ask

Why do we typically use float instead of double?

float is mostly used in graphics libraries, where its smaller size allows higher processing throughput. double is mostly used for general calculations in programming to reduce the error introduced when decimal values are rounded. float can still be used, but only in cases where that loss of precision is acceptable.

Does OpenGL use double precision?

Core OpenGL prior to version 4.0 doesn't support passing uniforms with double precision; GL 4.0 (and the ARB_gpu_shader_fp64 extension) added the glUniform*d variants.

Why are floats faster than doubles?

Floats are faster than doubles when you don't need double's precision, you are memory-bandwidth bound, and your hardware doesn't penalize single-precision operations. They conserve memory bandwidth because they occupy half the space per number, and some platforms can process more floats than doubles in parallel.
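A rough sketch of the bandwidth argument in plain C (the sizes are implementation-defined, but are 4 and 8 bytes on virtually every desktop platform):

    #include <stdio.h>

    int main(void)
    {
        enum { N = 1000000 };
        /* The same count of values takes half the memory, and therefore
         * half the bus traffic, in single precision. */
        printf("floats:  %zu bytes\n", N * sizeof(float));   /* typically 4000000 */
        printf("doubles: %zu bytes\n", N * sizeof(double));  /* typically 8000000 */
        return 0;
    }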

Can we use double instead of float?

It's legal for double and float to be the same type (and they are on some systems). That said, if they do differ, the main issue is precision: a double has much higher precision because of its larger size. If the numbers you are using will commonly exceed what a float can represent accurately, use a double.
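A small sketch of that precision gap in standard C (no OpenGL involved): a float has a 24-bit significand, so it already loses integers above 2^24, while a double carries them exactly.

    #include <stdio.h>

    int main(void)
    {
        float  f = 16777217.0f;  /* 2^24 + 1: not representable as a float */
        double d = 16777217.0;   /* fits exactly in a double               */
        printf("float : %.1f\n", f);  /* prints 16777216.0 */
        printf("double: %.1f\n", d);  /* prints 16777217.0 */
        return 0;
    }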


1 Answer

In the past, many OpenGL functions did have a double variant. glMultMatrix for example has f and d variations. Most of these don't exist anymore, but that has nothing to do with float vs. double. glMultMatrixd and glMultMatrixf are gone in core GL 3.1 and above.
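For illustration, in legacy (pre-3.1 or compatibility-profile) OpenGL the two flavors sit side by side; this is only a sketch and assumes a valid legacy GL context:

    #include <GL/gl.h>

    /* Identity matrix, column-major, as both variants expect. */
    static const GLfloat  mf[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    static const GLdouble md[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };

    void mult_both_variants(void)
    {
        glMultMatrixf(mf);  /* GLfloat variant  -- removed in core 3.1+ */
        glMultMatrixd(md);  /* GLdouble variant -- removed in core 3.1+ */
    }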

In core OpenGL, there are still functions that have double variants. glDepthRange takes double, though there is a float version (introduced mainly for GL ES compatibility). There are some functions that don't have double variants, like glBlendColor.
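For reference, the prototypes roughly as they appear in the modern core headers (shown only to make the inconsistency visible, not as a new API):

    void glDepthRange (GLdouble nearVal, GLdouble farVal);  /* takes doubles          */
    void glDepthRangef(GLfloat nearVal, GLfloat farVal);    /* float twin, added for ES */
    void glBlendColor (GLfloat red, GLfloat green,
                       GLfloat blue, GLfloat alpha);        /* float only             */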

Sometimes, OpenGL is just being inconsistent. Other times, it is simply following a reasonable principle: not lying to the user.

Take glBlendColor. If you could pass it double-precision values, that would imply that floating-point blending took place with double-precision accuracy. Since it most certainly does not (on any hardware that exists), providing an API that offers that accuracy is a tacit lie to the user: you're feeding high-precision values into a low-precision operation. The same logic applies to glDepthRange (double-precision depth buffers are not available either), yet it takes doubles. So again, inconsistency.

The glUniform* suite of functions is a much better example. They set state into the current program object. Until GL 4.0, the double versions did not exist. Why? Because that would have been a lie. GLSL pre-4.0 did not allow you to declare a double, for the simple and obvious reason that no pre-4.0 hardware could implement it. There's no point in letting the user set a double if the hardware can't handle it.
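A minimal sketch of what only became legal with GL 4.0 / GLSL 4.00 (ARB_gpu_shader_fp64); the program object prog and the uniform name u_scale are placeholders, and a GL 4.0 context with loaded function pointers (e.g. via a loader such as glad) is assumed:

    /* GLSL side -- "uniform double" is a compile error in any #version below 400:
     *
     *   #version 400
     *   uniform double u_scale;
     */

    /* C side -- glUniform1d is a GL 4.0 entry point with no pre-4.0 equivalent. */
    void set_double_uniform(GLuint prog)
    {
        GLint loc = glGetUniformLocation(prog, "u_scale");
        glUniform1d(loc, 0.25);
    }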

answered Nov 08 '22 by Nicol Bolas