How can I improve the performance of my custom OpenGL ES 2.0 depth texture generation?

I have an open source iOS application that uses custom OpenGL ES 2.0 shaders to display 3-D representations of molecular structures. It does this by using procedurally generated sphere and cylinder impostors drawn over rectangles, instead of building these same shapes out of lots of vertices. The downside to this approach is that the depth values for each fragment of these impostor objects need to be calculated in a fragment shader, to be used when objects overlap.

Unfortunately, OpenGL ES 2.0 does not let you write to gl_FragDepth, so I've needed to output these values to a custom depth texture. I do a pass over my scene using a framebuffer object (FBO), only rendering out a color that corresponds to a depth value, with the results being stored into a texture. This texture is then loaded into the second half of my rendering process, where the actual screen image is generated. If a fragment at that stage is at the depth level stored in the depth texture for that point on the screen, it is displayed. If not, it is tossed. More about the process, including diagrams, can be found in my post here.
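For concreteness, the render target for this pass is an FBO backed by a color texture, set up along these lines (a simplified sketch, not my exact code: width, height, and the variable names are illustrative, and error handling is omitted):

GLuint depthPassFramebuffer, depthPassTexture;

// Create the framebuffer object for the depth-encoding pass.
glGenFramebuffers(1, &depthPassFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, depthPassFramebuffer);

// Create the texture that will receive the color-encoded depth values.
glGenTextures(1, &depthPassTexture);
glBindTexture(GL_TEXTURE_2D, depthPassTexture);
// Nearest filtering avoids interpolating between encoded depth values.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Render color output into the texture instead of the screen.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, depthPassTexture, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // Handle an incomplete framebuffer here.
}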

The generation of this depth texture is a bottleneck in my rendering process and I'm looking for a way to make it faster. It seems slower than it should be, but I can't figure out why. In order to achieve the proper generation of this depth texture, GL_DEPTH_TEST is disabled, GL_BLEND is enabled with glBlendFunc(GL_ONE, GL_ONE), and glBlendEquation() is set to GL_MIN_EXT. I know that a scene output in this manner isn't the fastest on a tile-based deferred renderer like the PowerVR series in iOS devices, but I can't think of a better way to do this.
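Concretely, the state setup for this pass is roughly the following (again a sketch, not my exact code; GL_MIN_EXT comes from the EXT_blend_minmax extension, which needs to be available on the device):

// Depth-as-color pass: rather than using the depth buffer, keep the
// minimum encoded depth per pixel via a per-channel MIN blend.
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // pass both source and destination through
glBlendEquation(GL_MIN_EXT);   // per-channel minimum (EXT_blend_minmax)

// Clear to the farthest encoded depth, so MIN blending can only pull
// values closer to the viewer.
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);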

My depth fragment shader for spheres (the most common display element) looks to be at the heart of this bottleneck (Renderer Utilization in Instruments is pegged at 99%, indicating that I'm limited by fragment processing). It currently looks like the following:

precision mediump float;

varying mediump vec2 impostorSpaceCoordinate;
varying mediump float normalizedDepth;
varying mediump float adjustedSphereRadius;

const vec3 stepValues = vec3(2.0, 1.0, 0.0);
const float scaleDownFactor = 1.0 / 255.0;

void main()
{
    float distanceFromCenter = length(impostorSpaceCoordinate);
    if (distanceFromCenter > 1.0)
    {
        gl_FragColor = vec4(1.0);
    }
    else
    {
        float calculatedDepth = sqrt(1.0 - distanceFromCenter * distanceFromCenter);
        mediump float currentDepthValue = normalizedDepth - adjustedSphereRadius * calculatedDepth;

        // Inlined color encoding for the depth values
        float ceiledValue = ceil(currentDepthValue * 765.0);

        vec3 intDepthValue = (vec3(ceiledValue) * scaleDownFactor) - stepValues;

        gl_FragColor = vec4(intDepthValue, 1.0);
    }
}
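To unpack that inlined encoding: the depth value in [0, 1] is scaled to [0, 765] and spread across the three color channels, each of which the framebuffer clamps to [0, 1]. Since every channel is non-decreasing in depth, the per-channel GL_MIN_EXT blend preserves the encoding of the nearest depth, and summing the channels recovers the value. A CPU-side restatement of the round trip (my own illustration, not code from the project):

#include <math.h>

// Encode a depth value in [0.0, 1.0] the way the shader does: scale it
// to [0, 765] and offset each channel so the channels saturate in turn.
static void encodeDepth(float depth, float rgb[3])
{
    float ceiledValue = ceilf(depth * 765.0f);
    const float stepValues[3] = {2.0f, 1.0f, 0.0f};
    for (int i = 0; i < 3; i++)
    {
        float channel = ceiledValue / 255.0f - stepValues[i];
        // The framebuffer clamps each channel to [0, 1].
        rgb[i] = fminf(fmaxf(channel, 0.0f), 1.0f);
    }
}

// The clamped channels always sum to ceiledValue / 255, which is about
// 3x the original depth (quantized to steps of 1/765).
static float decodeDepth(const float rgb[3])
{
    return (rgb[0] + rgb[1] + rgb[2]) / 3.0f;
}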

On an iPad 1, this shader takes 35 - 68 ms to render a frame of a DNA spacefilling model using a passthrough shader for display (18 - 35 ms on an iPhone 4). According to the PowerVR PVRUniSCo compiler (part of their SDK), this shader uses 11 GPU cycles at best, 16 cycles at worst. I'm aware that you're advised not to use branching in a shader, but in this case that led to better performance than otherwise.

When I simplify it to

precision mediump float;

varying mediump vec2 impostorSpaceCoordinate;
varying mediump float normalizedDepth;
varying mediump float adjustedSphereRadius;

void main()
{
    gl_FragColor = vec4(adjustedSphereRadius * normalizedDepth * (impostorSpaceCoordinate + 1.0) / 2.0, normalizedDepth, 1.0);
}

it takes 18 - 35 ms on the iPad 1, but only 1.7 - 2.4 ms on the iPhone 4. The estimated GPU cycle count for this shader is 8 cycles, yet the render time clearly doesn't scale linearly with cycle count.

Finally, if I just output a constant color:

precision mediump float;

void main()
{
    gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0);
}

the rendering time drops to 1.1 - 2.3 ms on iPad 1 (1.3 ms on iPhone 4).

The nonlinear scaling in rendering time and sudden change between iPad and iPhone 4 for the second shader makes me think that there's something I'm missing here. A full source project containing these three shader variants (look in the SphereDepth.fsh file and comment out the appropriate sections) and a test model can be downloaded from here, if you wish to try this out yourself.

If you've read this far, my question is: based on this profiling information, how can I improve the rendering performance of my custom depth shader on iOS devices?

asked May 18 '11 at 21:05 by Brad Larson

1 Answer

Based on the recommendations by Tommy, Pivot, and rotoglup, I've implemented some optimizations which have led to a doubling of the rendering speed for both the depth texture generation and the overall rendering pipeline in the application.

First, I re-enabled the precalculated sphere depth and lighting texture that I'd used before with little effect, only now I use proper lowp precision values when handling the colors and other values from that texture. This combination, along with proper mipmapping for the texture, seems to yield a ~10% performance boost.
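For reference, the filtering setup for that lookup texture is along these lines (a sketch; sphereDepthTexture is an illustrative name, and this assumes the base texture image has already been uploaded):

// Trilinear mipmapped filtering on the precalculated sphere depth /
// lighting lookup texture cuts texture bandwidth when the impostors
// are small on screen.
glBindTexture(GL_TEXTURE_2D, sphereDepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);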

More importantly, I now do a pass before rendering both my depth texture and the final raytraced impostors where I lay down some opaque geometry to block pixels that would never be rendered. To do this, I enable depth testing and then draw out the squares that make up the objects in my scene, shrunk down by sqrt(2) / 2, with a simple opaque shader. This creates inset squares covering the area known to be opaque within each represented sphere.

I then disable depth writes using glDepthMask(GL_FALSE) and render the square sphere impostors at a location one radius closer to the user. This allows the tile-based deferred rendering hardware in iOS devices to efficiently strip out fragments that would never appear onscreen under any conditions, yet still gives smooth intersections between the visible sphere impostors based on per-pixel depth values. This is depicted in my crude illustration below:

[Figure: layered spheres and opacity testing]

In this example, the opaque blocking squares for the top two impostors do not prevent any of the fragments from those visible objects from being rendered, yet they block a chunk of the fragments from the lowest impostor. The frontmost impostors can then use per-pixel tests to generate a smooth intersection, while many of the pixels from the rear impostor don't waste GPU cycles by being rendered.

I hadn't thought to disable depth writes, yet leave on depth testing when doing the last rendering stage. This is the key to preventing the impostors from simply stacking on one another, yet still using some of the hardware optimizations within the PowerVR GPUs.
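In code form, the draw order for this stage is roughly as follows (a sketch: drawInsetOpaqueSquares() and drawSphereImpostors() are placeholder names for the actual drawing code):

// Pass 1: depth testing and depth writes on. Draw the inset opaque
// squares that mark regions known to be fully covered by each sphere.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawInsetOpaqueSquares();

// Pass 2: depth testing still on, depth writes off. Draw the full-size
// impostor squares one radius closer to the viewer; fragments hidden
// behind another sphere's opaque core are rejected before the expensive
// fragment shader runs, while the rest still blend per pixel.
glDepthMask(GL_FALSE);
drawSphereImpostors();

glDepthMask(GL_TRUE);   // restore depth writes for any later passes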

In my benchmarks, rendering the test model I used above yields times of 18 - 35 ms per frame, as compared to the 35 - 68 ms I was getting previously, a near doubling in rendering speed. Applying this same opaque geometry pre-rendering to the raytracing pass yields a doubling in overall rendering performance.

Oddly, when I tried to refine this further by using inset and circumscribed octagons, which should cover ~17% fewer pixels when drawn and block fragments more efficiently, performance was actually worse than when using simple squares. Tiler utilization was still less than 60% in the worst case, so maybe the larger geometry was resulting in more cache misses.

EDIT (5/31/2011):

Based on Pivot's suggestion, I created inscribed and circumscribed octagons to use instead of my rectangles, only I followed the recommendations here for optimizing triangles for rasterization. In previous testing, octagons yielded worse performance than squares, despite removing many unnecessary fragments and letting you block covered fragments more efficiently. By adjusting the triangle drawing as follows:

[Figure: rasterization-optimizing octagon triangle layout]

I was able to reduce overall rendering time by an average of 14% on top of the above-described optimizations by switching to octagons from squares. The depth texture is now generated in 19 ms, with occasional dips to 2 ms and spikes to 35 ms.
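For illustration, generating the two octagon outlines themselves is simple (my own sketch, not the project code; the part that actually mattered here is how the eight corners are stitched into triangles, per the recommendations linked above):

#include <math.h>

// Eight corners of a regular octagon around the origin. For a unit
// circle, an inscribed octagon (fits inside the circle, used for the
// opaque blocker) has a corner radius of 1.0; a circumscribed octagon
// (contains the circle, used for the impostor) needs 1 / cos(pi / 8).
static void octagonCorners(float cornerRadius, float rotation, float corners[8][2])
{
    for (int i = 0; i < 8; i++)
    {
        float angle = rotation + (float)i * (float)M_PI / 4.0f;
        corners[i][0] = cornerRadius * cosf(angle);
        corners[i][1] = cornerRadius * sinf(angle);
    }
}

Scaled by the sphere radius, octagonCorners(1.0f, 0.0f, corners) gives the inscribed blocker, and octagonCorners(1.0f / cosf((float)M_PI / 8.0f), (float)M_PI / 8.0f, corners) a circumscribed impostor whose flat edges line up with the blocker's corners (that relative rotation is my assumption, not something from the project).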

EDIT 2 (5/31/2011):

I've revisited Tommy's idea of using the step function, now that I have fewer fragments to discard due to the octagons. This, combined with a depth lookup texture for the sphere, now leads to a 2 ms average rendering time on the iPad 1 for the depth texture generation for my test model. I consider that to be about as good as I could hope for in this rendering case, and a giant improvement from where I started. For posterity, here is the depth shader I'm now using:

precision mediump float;

varying mediump vec2 impostorSpaceCoordinate;
varying mediump float normalizedDepth;
varying mediump float adjustedSphereRadius;
varying mediump vec2 depthLookupCoordinate;

uniform lowp sampler2D sphereDepthMap;

const lowp vec3 stepValues = vec3(2.0, 1.0, 0.0);

void main()
{
    lowp vec2 precalculatedDepthAndAlpha = texture2D(sphereDepthMap, depthLookupCoordinate).ra;

    float inCircleMultiplier = step(0.5, precalculatedDepthAndAlpha.g);

    float currentDepthValue = normalizedDepth + adjustedSphereRadius - adjustedSphereRadius * precalculatedDepthAndAlpha.r;

    // Inlined color encoding for the depth values
    currentDepthValue = currentDepthValue * 3.0;

    lowp vec3 intDepthValue = vec3(currentDepthValue) - stepValues;

    gl_FragColor = vec4(1.0 - inCircleMultiplier) + vec4(intDepthValue, inCircleMultiplier);
}

I've updated the testing sample here, if you wish to see this new approach in action as compared to what I was doing initially.

I'm still open to other suggestions, but this is a huge step forward for this application.

answered Sep 27 '22 at 18:09 by Brad Larson