Given an arbitrary 3D mesh, I'm looking for an algorithm that would perform hidden-line removal in real-time. I'm working in the context of OpenGL, which means that we can take advantage of the Z-Buffer.
I guess that the algorithm should include a solution to the following two problems:
1) Determining the "hard-edges", so they can later be drawn using regular OpenGL lines. These "hard-edges" should correspond to edges where the angle between the two corresponding faces is above some threshold.
For the sake of simplicity, let's state that it is guaranteed that no more than 2 faces are defined per edge.
The calculation of the "hard-edges" should take place once per mesh, i.e. it does not depend on the view-point.
2) Determining the outline of the mesh's silhouette, according to the current view-point. This part could possibly be done using classical OpenGL techniques (involving polygon-offset or the stencil-buffer), but it would be preferable to draw the silhouette using regular OpenGL lines, to keep a unified look & feel for all the lines.
For that part, I'm not sure whether the vertices of the silhouette should all pass through mesh vertices or not. In any case, for meshes like cubes, where there is no need for a silhouette (since it is enough to draw only the "hard-edges"), the algorithm should be smart enough to avoid drawing a "similar line" twice...
Priority algorithm: This algorithm is also known as the depth or Z algorithm. It imagines that objects are modelled with lines, and that lines are generated where surfaces join. If only the visible surfaces are created, then the invisible lines are automatically removed by this algorithm.
Five different hidden surface algorithms are commonly discussed: z-buffer, scan line, ray casting, depth sort, and BSP tree. Two key ideas are applied to help increase the speed of these algorithms: sorting of edges by depth, and pixel coherence for depth and intensity.
Z-buffer algorithm: If the incoming pixel is behind the pixel already stored in the Z-buffer, it is eliminated; otherwise it is shaded and its depth value replaces the one in the Z-buffer. Z-buffering handles dynamic scenes easily, and is now implemented efficiently in graphics hardware.
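For illustration, here is a minimal software sketch of that per-pixel test; the Fragment and FrameBuffers types are hypothetical placeholders, since OpenGL's hardware depth test performs the equivalent comparison for you:

```cpp
#include <limits>
#include <vector>

struct Fragment { int x, y; float depth; float r, g, b; };

// Hypothetical software rasterizer state: one depth value and one color per pixel.
struct FrameBuffers {
    int width, height;
    std::vector<float> depth;   // initialized to +infinity (far plane)
    std::vector<float> color;   // 3 floats per pixel

    FrameBuffers(int w, int h)
        : width(w), height(h),
          depth(w * h, std::numeric_limits<float>::infinity()),
          color(w * h * 3, 0.0f) {}

    // The core Z-buffer test: keep the fragment only if it is closer
    // than what is already stored at that pixel.
    void shade(const Fragment& f) {
        int i = f.y * width + f.x;
        if (f.depth >= depth[i]) return;   // behind the stored pixel: discard
        depth[i] = f.depth;                // closer: replace the stored depth...
        color[i * 3 + 0] = f.r;            // ...and write the shaded color
        color[i * 3 + 1] = f.g;
        color[i * 3 + 2] = f.b;
    }
};
```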
There are two broad approaches to hidden-surface removal: object-space methods and image-space methods. Object-space methods operate in the world (physical) coordinate system, while image-space methods operate in the screen coordinate system.
There are a couple of things going on here. First, you want to draw the lines of the mesh, and second, you want to draw a silhouette. Here is a generic procedure to make this work (a code sketch follows the steps below):
Draw the mesh (using triangles) to the depth buffer only, with the color mask cleared so that no color is written.
Turn the color mask back on, switch the front face, and rescale/offset your mesh by some small percentage. Flipping the front face causes you to only see the inside of the offset mesh, which gets clipped by the depth buffer of the previously drawn mesh. If you do this right, it should give you a neat-looking outline. Here is an example of this technique: http://www.codeproject.com/KB/openGL/Outline_Mode.aspx
Finally, draw the edges of the mesh (while keeping the depth buffer intact from the previous two operations) over the existing mesh and shell.
The result is that you will now have all the edges of your mesh drawn, together with a nice silhouette!
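Here is a minimal sketch of those three passes in classic fixed-function OpenGL. The helpers drawMesh() and drawHardEdges() are assumed (they would issue the triangles and the precomputed edge lines), and the 1.02 scale factor is an arbitrary choice you would tune per mesh:

```cpp
#include <GL/gl.h>

// Hypothetical helpers, not from the original post: drawMesh() issues the
// mesh triangles, drawHardEdges() issues the precomputed hard edges as GL_LINES.
void drawMesh();
void drawHardEdges();

// The three passes described above. Assumes a CCW-wound mesh roughly
// centred at the origin.
void renderOutlinedMesh()
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);          // the front-face flip below only has an
    glCullFace(GL_BACK);             // effect when face culling is enabled

    // Pass 1: depth only -- clear the color mask and draw the mesh.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawMesh();

    // Pass 2: re-enable color, flip the front face and draw a slightly
    // enlarged copy. Only the parts of the shell that stick out past the
    // depth of the original mesh survive, which reads as a silhouette rim.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glFrontFace(GL_CW);
    glColor3f(0.0f, 0.0f, 0.0f);     // silhouette color
    glPushMatrix();
    glScalef(1.02f, 1.02f, 1.02f);   // "some small percentage"; tune per mesh
    drawMesh();
    glPopMatrix();
    glFrontFace(GL_CCW);             // restore the usual winding

    // Pass 3: draw the edges as regular lines over mesh and shell, keeping
    // the depth buffer intact. GL_LEQUAL (or a polygon offset in pass 1)
    // helps avoid z-fighting between the lines and the surface they lie on.
    glDepthFunc(GL_LEQUAL);
    drawHardEdges();
    glDepthFunc(GL_LESS);
}
```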
EDIT: After rereading your post a second time, it sounds like you don't want to draw all the edges, only those which occur at a boundary with sufficiently high curvature. So, to do this, you could do one of the following:
Preprocess the edges of your mesh, and cull out all of the edges which link pairs of nearly coplanar faces. This is easy to check by just comparing the dot product of their normals. If it is sufficiently close to 1, discard that edge from your rendered set.
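As a rough sketch of that preprocessing step, assuming an indexed triangle mesh (the Vec3 type, the findHardEdges name and its helpers are placeholders, not from the original post):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v)     { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Returns the vertex-index pairs of the "hard" edges of an indexed triangle
// mesh: edges whose two adjacent faces differ by more than angleThreshold,
// plus boundary edges (only one adjacent face). This is view-independent,
// so it only needs to run once per mesh.
std::vector<std::pair<int,int>> findHardEdges(const std::vector<Vec3>& vertices,
                                              const std::vector<std::array<int,3>>& triangles,
                                              float angleThresholdRadians)
{
    // Per-face normals.
    std::vector<Vec3> normals(triangles.size());
    for (size_t f = 0; f < triangles.size(); ++f) {
        const auto& t = triangles[f];
        normals[f] = normalize(cross(sub(vertices[t[1]], vertices[t[0]]),
                                     sub(vertices[t[2]], vertices[t[0]])));
    }

    // Map each (sorted) edge to the faces that share it (at most 2, as assumed in the question).
    std::map<std::pair<int,int>, std::vector<int>> edgeFaces;
    for (size_t f = 0; f < triangles.size(); ++f) {
        const auto& t = triangles[f];
        for (int k = 0; k < 3; ++k) {
            int a = t[k], b = t[(k + 1) % 3];
            edgeFaces[{std::min(a, b), std::max(a, b)}].push_back((int)f);
        }
    }

    // Keep boundary edges, plus edges whose dihedral angle exceeds the
    // threshold (i.e. the dot product of the normals is below cos(threshold)).
    const float cosThreshold = std::cos(angleThresholdRadians);
    std::vector<std::pair<int,int>> hardEdges;
    for (const auto& [edge, faces] : edgeFaces) {
        if (faces.size() == 1 ||
            dot(normals[faces[0]], normals[faces[1]]) < cosThreshold) {
            hardEdges.push_back(edge);
        }
    }
    return hardEdges;
}
```

The returned index pairs can then be drawn with GL_LINES, which covers part 1) of the question.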
More generally, you can also approximate the curvature of your mesh in screen space. Doing this is the inverse of computing the so-called screen-space ambient occlusion. (Another neat application of this technique is listed here: http://zigguratvertigo.com/2011/03/07/gdc-2011-approximating-translucency-for-a-fast-cheap-and-convincing-subsurface-scattering-look/) Once you have the curvature of your object computed from the depth buffer, you can filter the lines, drawing only the line fragments that occur on pixels with sufficiently high curvature.
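If you go that route, one crude proxy (an illustrative assumption, not the exact method described above) is to threshold the discrete Laplacian of a linearized depth buffer, which spikes at creases and silhouettes; in practice you would do the equivalent in a fragment shader rather than on the CPU:

```cpp
#include <cmath>
#include <vector>

// Rough screen-space "curvature" proxy: assumes you have read back a linear
// depth buffer into 'depth' (width*height floats). The discrete Laplacian of
// depth is large where the surface bends or where depth jumps, so pixels
// above 'threshold' are candidates for drawing line fragments.
std::vector<bool> highCurvatureMask(const std::vector<float>& depth,
                                    int width, int height, float threshold)
{
    std::vector<bool> mask(depth.size(), false);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int i = y * width + x;
            float laplacian = depth[i - 1] + depth[i + 1] +
                              depth[i - width] + depth[i + width] -
                              4.0f * depth[i];
            mask[i] = std::fabs(laplacian) > threshold;
        }
    }
    return mask;
}
```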