This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want.
A lot of tutorials, and even SO questions, have similar tips; generally covering:

- Use backface culling
- Join (pre-multiply) your transformation matrices on the CPU rather than per vertex
- Interleave your vertex data so related attributes sit together
- Minimise state changes and GL calls, batching where possible
And possibly a few/many others. I am (out of curiosity) rendering 28 million triangles in my application using several vertex buffers. I have tried all of the above techniques (to the best of my knowledge) and seen almost no performance change.
Whilst I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious: where do these optimisation 'tips' actually come into play?
My CPU idles at around 20-50% during rendering, so I assume I am GPU-bound when it comes to increasing performance.
Note: I am looking into gDEBugger at the moment
Cross posted at Game Development
Point 1 is obvious, as it saves fill rate. If the primitives of an object's back side happen to get processed first, culling omits those faces. However, modern GPUs tolerate overdraw quite well. I once measured (on a GeForce 8800 GTX) up to 20% overdraw before a significant performance hit. But it's better to save this reserve for things like occlusion culling, rendering of blended geometry and the like.
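For reference, turning it on is a one-time state setting; a minimal sketch, assuming the GL default of counter-clockwise front faces:

    // Discard back-facing triangles before rasterization, saving fill rate.
    // Assumes front faces are wound counter-clockwise (the GL default).
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glFrontFace(GL_CCW);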
Point 2 is, well, pointless. The matrices have never been calculated on the GPU – well, unless you count the SGI Onyx. Matrices have always been a kind of global rendering parameter calculated on the CPU and then pushed into global registers on the GPU, now called uniforms, so joining them has only very little benefit. In the shader it saves only one additional vector-matrix multiplication (which boils down to 4 MAD instructions), at the expense of less algorithmic flexibility.
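For illustration, "joining" just means concatenating the matrices once per draw on the CPU and uploading the result as a single uniform. A minimal sketch, assuming GLM for the CPU-side math, a GL loader header already included, and an illustrative uniform name u_mvp:

    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Upload one pre-multiplied model-view-projection matrix per draw call.
    void setMVP(GLuint program, const glm::mat4& model,
                const glm::mat4& view, const glm::mat4& projection)
    {
        glm::mat4 mvp = projection * view * model;            // joined on the CPU
        GLint loc = glGetUniformLocation(program, "u_mvp");   // uniform name is illustrative
        glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
    }

The vertex shader then does a single mat4 * vec4 multiply, which is the handful of MAD instructions mentioned above.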
Point 3 is all about cache efficiency. Data belonging together should fit into a cache line.
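In practice that means interleaving the attributes of each vertex rather than keeping separate position/normal/UV arrays. A minimal sketch, assuming the relevant VBO is already bound and that attribute indices 0/1/2 match the shader's layout:

    #include <cstddef>   // offsetof

    // All attributes of one vertex sit next to each other in memory (32 bytes here).
    struct Vertex {
        float position[3];
        float normal[3];
        float uv[2];
    };

    // Describe the interleaved layout; the stride is the size of one whole vertex.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);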
Point 4 is about preventing state changes from thrashing the caches. But it strongly depends on which GL calls they mean. Changing uniforms is cheap. Switching a texture is expensive. The reason is that a uniform sits in a register, not in some piece of memory that's cached. Switching a shader is expensive, because different shaders exhibit different runtime behaviour, thus thrashing the pipeline execution prediction, altering memory (and thus cache) access patterns, and so on.
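One cheap way to cut down on the expensive changes is to filter out redundant ones on the CPU side; a minimal sketch with illustrative caching helpers:

    // Skip a switch if the requested object is already current.
    static GLuint g_currentProgram = 0;
    static GLuint g_currentTexture = 0;

    void useProgramCached(GLuint program) {
        if (program != g_currentProgram) {
            glUseProgram(program);                  // expensive: new runtime behaviour
            g_currentProgram = program;
        }
    }

    void bindTexture2DCached(GLuint texture) {
        if (texture != g_currentTexture) {
            glBindTexture(GL_TEXTURE_2D, texture);  // expensive: texture switch
            g_currentTexture = texture;
        }
    }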
But those are all micro-optimizations (some of them with huge impact). However, I recommend looking into large-impact optimizations, like implementing an early-Z pass and using occlusion queries in the early-Z pass for quick discrimination of whole geometry batches. One large-impact optimization, which essentially consists of summing up a lot of Point-4-like micro-optimizations, is to sort render batches by expensive GL states: group everything with common shaders, within those groups sort by texture, and so on. This state grouping only affects the visible render passes. In the early-Z pass you're only testing outcomes against the Z buffer, so there's only geometry transformation, and the fragment shaders will just pass the Z value.
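A minimal sketch of the depth pre-pass idea; the drawScene* functions and depthOnlyProgram are placeholders for your own batching code:

    // Pass 1: lay down depth only, with the cheapest possible fragment work.
    glUseProgram(depthOnlyProgram);      // vertex transform only, empty fragment shader
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawSceneGeometryOnly();

    // Pass 2: full shading; hidden fragments now fail the depth test early.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);               // the depth buffer is already complete
    glDepthFunc(GL_LEQUAL);
    drawSceneSortedByShaderThenTexture();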
The first thing you need to know is where exactly your bottleneck is. "The GPU" is not an answer, because it's a complex system. The actual problem might be among these:

- Fill rate (including blending and MSAA resolve)
- Vertex processing / triangle setup
- Draw-call overhead on the CPU side
- CPU-to-GPU data transfer
You need to perform a series of tests to locate the problem. For example, draw everything to a bigger FBO to see whether it's a fill-rate problem (or increase the MSAA amount). Or draw everything twice to check for draw-call overhead issues.
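Timer queries are handy for this kind of test, since they measure GPU time for a specific block of work; a minimal sketch, assuming GL 3.3 / ARB_timer_query and a placeholder draw function:

    #include <cstdio>

    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    drawSuspectedExpensivePass();        // placeholder for the pass being measured
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 ns = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &ns);   // blocks until the result is ready
    printf("GPU time: %.3f ms\n", ns / 1.0e6);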
Just to add my 2 cents to @kvark's and @datenwolf's answers: while the points you mention are 'basic' GPU performance tips, more involved optimization is very application-dependent.
In your geometry-heavy test case, you're already pushing 28 million triangles * 40 FPS = 1120 million triangles per second. That is already quite a lot: most GPUs out there (not all; Fermi is a notable exception) have a triangle setup rate of 1 triangle per GPU clock cycle. That means a GPU running at, say, 800 MHz cannot process more than 800 million triangles per second, and that's without even drawing a single pixel. NVIDIA Fermi can process 4 triangles per clock cycle.
If you're hitting this limit (you don't mention your hardware platform), there's not much you can do at the OpenGL/GPU level. All you can do is send less geometry, via more efficient culling (frustum or occlusion), or via a LOD scheme.
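As a rough illustration of CPU-side frustum culling, a sphere-vs-plane test per object is often enough to reject whole batches before they ever reach the GPU. A minimal sketch, assuming GLM and that you already extract the six frustum planes from your projection-view matrix:

    #include <glm/glm.hpp>

    struct Plane  { glm::vec3 n; float d; };        // plane equation: dot(n, p) + d = 0
    struct Sphere { glm::vec3 center; float radius; };

    // True if the bounding sphere is at least partly inside all six planes.
    bool sphereInFrustum(const Sphere& s, const Plane planes[6]) {
        for (int i = 0; i < 6; ++i)
            if (glm::dot(planes[i].n, s.center) + planes[i].d < -s.radius)
                return false;                        // fully behind one plane: cull it
        return true;
    }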
Another thing is that tiny triangles hurt fill rate, as rasterizers do parallel processing on square blocks of pixels; see http://www.geeks3d.com/20101201/amd-graphics-blog-tessellation-for-all/.