By default, OpenGL records errors as they occur. They can either be queried with glGetError or reported through a debug callback registered with glDebugMessageCallback.
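For context, here is a minimal sketch of the two mechanisms, assuming function pointers loaded with a loader such as GLAD and a debug-capable context for the callback path:

```c
#include <glad/glad.h>  /* assumption: GLAD is used to load GL function pointers */
#include <stdio.h>

/* Polling style: drain the error queue after suspect calls. */
static void check_gl_errors(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%x at %s\n", err, where);
}

/* Callback style: requires GL 4.3+ or KHR_debug; guaranteed output
 * only on a debug context. */
static void APIENTRY debug_cb(GLenum source, GLenum type, GLuint id,
                              GLenum severity, GLsizei length,
                              const GLchar *message, const void *user)
{
    (void)source; (void)type; (void)id;
    (void)severity; (void)length; (void)user;
    fprintf(stderr, "GL debug: %s\n", message);
}

void install_debug_output(void)
{
    glEnable(GL_DEBUG_OUTPUT);
    glDebugMessageCallback(debug_cb, NULL);
}
```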
Doesn't this approach consume resources unnecessarily when no errors actually occur?
To save resources, I'd like to know how to disable this mechanism. I am thinking of disabling it in a "release" build of my application, where no errors are expected to occur.
It's safe to assume that the internal API error checking by OpenGL introduces a non-zero overhead at runtime. How much overhead depends on the actual OpenGL implementation used.
Since OpenGL 4.6 (or earlier with the KHR_no_error extension), a context can be created with error checking disabled. Such a context is requested at creation time through the windowing API (e.g. WGL/GLX_ARB_create_context_no_error, or the corresponding GLFW/SDL hints) and afterwards reports the GL_CONTEXT_FLAG_NO_ERROR_BIT flag in GL_CONTEXT_FLAGS. On a no-error context, glGetError always returns GL_NO_ERROR, and invalid API usage results in undefined behavior rather than a recorded error.
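As a minimal sketch, assuming GLFW 3.2+ as the windowing library and a GLAD header generated for a 4.6 profile, the no-error context could be requested and then verified like this:

```c
#include <glad/glad.h>   /* assumption: GLAD (4.6 profile) loads GL pointers */
#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    /* Request a no-error context; GL errors become undefined behavior. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 6);
    glfwWindowHint(GLFW_CONTEXT_NO_ERROR, GLFW_TRUE);

    GLFWwindow *window = glfwCreateWindow(640, 480, "no-error", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);
    gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);

    /* Verify that the driver actually honored the request. */
    GLint flags = 0;
    glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
    if (flags & GL_CONTEXT_FLAG_NO_ERROR_BIT)
        puts("running with a no-error context");

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```

Note that the hint is only a request: a driver may still hand back a regular context, which is why checking GL_CONTEXT_FLAGS at runtime is worthwhile before relying on the behavior.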
More details can be found in the KHR_no_error extension specification, which was promoted to core in OpenGL 4.6.