
Is discard bad for program performance in OpenGL?


I was reading this article, and the author writes:

Here's how to write high-performance applications on every platform in two easy steps:
[...]
Follow best practices. In the case of Android and OpenGL, this includes things like "batch draw calls", "don't use discard in fragment shaders", and so on.

I had never heard before that discard can have a negative impact on performance, and I have been using it to avoid blending when detailed alpha isn't necessary.

Could someone please explain why and when using discard might be considered bad practice, and how discard + depth test compares with alpha + blend?

Edit: After receiving an answer to this question, I did some testing by rendering a background gradient with a textured quad on top of it (a shader sketch follows the list below).

  • Using GL_DEPTH_TEST and a fragment shader ending with the line "if( gl_FragColor.a < 0.5 ){ discard; }" gave about 32 fps.
  • Removing the if/discard statement from the fragment shader increased the rendering speed to about 44 fps.
  • Using GL_BLEND with the blend function "(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)" instead of GL_DEPTH_TEST also resulted in around 44 fps.
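
For reference, a minimal GLSL ES sketch of the discard variant from that test could look like the following; only the final alpha check comes from the post, while the uniform and varying names and the texture sampling are assumed for illustration. The faster variants are the same shader without the if/discard block, drawn either with GL_DEPTH_TEST still enabled or with GL_BLEND and the blend function above instead.

    precision mediump float;
    uniform sampler2D u_texture;   // assumed name
    varying vec2 v_texCoord;       // assumed name

    void main() {
        vec4 color = texture2D(u_texture, v_texCoord);
        // Reject mostly-transparent fragments instead of blending them,
        // matching the "if( gl_FragColor.a < 0.5 ){ discard; }" from the test.
        if (color.a < 0.5) {
            discard;
        }
        gl_FragColor = color;
    }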
asked Dec 14 '11 by Jave

1 Answer

It's hardware-dependent. On PowerVR hardware, and other GPUs that use tile-based rendering, using discard means that the TBR can no longer assume that every fragment it rasterizes will actually be written. That assumption matters because it lets the TBR resolve depth for the whole tile first, then run the fragment shaders only for the top-most, visible fragments: a sort of deferred rendering approach, except done in hardware. Once discard is in the shader, visibility isn't known until the shader has run, so that optimization has to be given up.

Note that you would get the same issue from turning on alpha test.
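
To make that concrete: on hardware with programmable fragment shading, the fixed-function alpha test is typically lowered to the same kind of data-dependent fragment kill, so the GPU again can't know a fragment's visibility until the shader has run. A rough GLSL sketch of that equivalence (an illustration, not actual driver output; names are assumed):

    // Roughly what glEnable(GL_ALPHA_TEST) + glAlphaFunc(GL_GREATER, ref)
    // asks the hardware to do, with u_alphaRef standing in for ref:
    precision mediump float;
    uniform sampler2D u_texture;   // assumed name
    uniform float u_alphaRef;      // assumed name
    varying vec2 v_texCoord;       // assumed name

    void main() {
        vec4 color = texture2D(u_texture, v_texCoord);
        if (color.a <= u_alphaRef) {
            discard;               // fragment killed after shading, not before
        }
        gl_FragColor = color;
    }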

answered by Nicol Bolas