I've seen the occasional article suggest ordering your vertices from nearest to furthest from the camera when sending them to OpenGL (for any of the OpenGL variants). The suggested reason is that OpenGL will not fully process/render a vertex if it is behind one already rendered.
Since ordering geometry by depth is costly, and the ordering typically changes every frame as the camera moves, how common or necessary is this design?
I had previously thought that OpenGL would "look" at all the submitted vertices and handle depth buffering itself, regardless of their order, before rendering the entire batch. But if one primitive can in fact reach the screen before another, I can see how ordering might improve performance.
Is drawing front-to-back necessary for optimizing renders?
Once a primitive is rasterized, its fragments' z values can be tested against the depth buffer to do an "early z kill", which skips running the fragment shader for fragments that are already hidden. That's the main reason to render front-to-back. Tip: when you have transparent (alpha-textured) polygons, you must render back-to-front instead, so blending composites correctly.
The OpenGL spec defines a state machine and does not specify in what order the rendering actually happens, only that the results should be correct (within certain tolerances).
Edit for clarity: What I'm trying to say above is that the hardware can do whatever it wants, as long as the primitives appear to have been processed in order
However, most GPUs are streaming processors and their OpenGL drivers do not "batch up" geometry, except perhaps for performance reasons (minimum DMA size, etc). If you feed in polygon A followed by polygon B, then they are fed into the pipeline one after the other and are processed independently (for the most part) of each other. If there are a sufficient number of polys between A and B, then there's a good chance A completes before B, and if B was behind A, its fragments will be discarded via "early z kill".
Edit for clarity: What I'm trying to say above is that since the hardware does not "batch up" geometry, it cannot do the front-to-back ordering automatically.