Depth offset in OpenGL

What would be the best way of offsetting depth in OpenGL? I currently have an index vertex attribute per polygon which I pass to the vertex shader. My goal is to offset the polygons in depth so that the polygon with the highest index is always in front of the ones with a lower index. I currently use this simple approach, modifying gl_Position.z:

gl_Position.z += -index * 0.00001;
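
For context, a complete vertex shader around that line might look roughly like this. This is only a sketch; the attribute names position and index, the layout locations and the mvp uniform are assumptions, not taken from the question.

#version 330 core

layout(location = 0) in vec3 position;
layout(location = 1) in float index;   // per-polygon index attribute (assumed name)

uniform mat4 mvp;                      // assumed model-view-projection matrix

void main()
{
    gl_Position = mvp * vec4(position, 1.0);
    // push polygons with a higher index slightly towards the viewer
    gl_Position.z += -index * 0.00001;
}
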
asked Jul 25 '17 by sabotage3d


1 Answer

The usual way to set an automatic depth offset is glPolygonOffset(GLfloat factor, GLfloat units):

When GL_POLYGON_OFFSET_FILL, GL_POLYGON_OFFSET_LINE, or GL_POLYGON_OFFSET_POINT is enabled, each fragment's depth value will be offset after it is interpolated from the depth values of the appropriate vertices. The value of the offset is factor * DZ + r * units, where DZ is a measurement of the change in depth relative to the screen area of the polygon, and r is the smallest value that is guaranteed to produce a resolvable offset for a given implementation. The offset is added before the depth test is performed and before the value is written into the depth buffer.

glEnable( GL_POLYGON_OFFSET_FILL );   // enable the depth offset for filled polygons
glPolygonOffset( 1.0, 1.0 );          // factor = 1.0, units = 1.0
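
Since glPolygonOffset is per draw call state, applying it to the per-polygon index from the question would require one draw call per polygon. A minimal sketch, assuming each polygon is drawn separately and that polygonCount, firstVertex and vertexCount are hypothetical variables describing the geometry:

glEnable( GL_POLYGON_OFFSET_FILL );
for ( int i = 0; i < polygonCount; ++i )
{
    // a higher index gets a more negative offset and therefore ends up closer to the viewer
    glPolygonOffset( 0.0f, -(float)i );
    glDrawArrays( GL_TRIANGLES, firstVertex[i], vertexCount[i] );
}
glDisable( GL_POLYGON_OFFSET_FILL );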

If you want to manipulate the depth manually, then you have to set gl_FragDepth inside the fragment shader.

gl_FragDepth, Fragment Shader:

Available only in the fragment language, gl_FragDepth is an output variable that is used to establish the depth value for the current fragment. If depth buffering is enabled and no shader writes to gl_FragDepth, then the fixed function value for depth will be used (this value is contained in the z component of gl_FragCoord); otherwise, the value written to gl_FragDepth is used.

In general, gl_FragDepth is calculated as follows (see GLSL gl_FragCoord.z Calculation and Setting gl_FragDepth):

float ndc_depth = clip_space_pos.z / clip_space_pos.w;
gl_FragDepth    = (((farZ-nearZ) * ndc_depth) + nearZ + farZ) / 2.0;
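
Applied to the question, a fragment shader could start from the fixed function depth and shift it by the polygon index. This is only a sketch; it assumes the index is forwarded from the vertex shader as a flat input named polyIndex and that the scale factor 0.00001 is large enough for the depth buffer format in use (see below):

#version 330 core

flat in float polyIndex;   // per-polygon index, forwarded by the vertex shader (assumed name)
out vec4 fragColor;

void main()
{
    fragColor = vec4( 1.0 );
    // take the fixed function depth and pull higher indices towards the viewer
    gl_FragDepth = gl_FragCoord.z - polyIndex * 0.00001;
}
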

The minimum offset you have to add to or subtract from the depth to get a resolvable difference depends on the format of the depth buffer.

The depth buffer formats GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24 and GL_DEPTH_COMPONENT32 are normalized integer formats, where the 16, 24 or 32 bit integer range is mapped onto the depth range [0, 1].
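
For a normalized N bit depth buffer the smallest resolvable step is therefore 1/(2^N - 1). A small sketch that prints these steps, just for illustration:

#include <stdio.h>

int main( void )
{
    // smallest resolvable depth step of the normalized integer depth buffer formats
    printf( "16 bit: %.3g\n", 1.0 / 65535.0 );        /* ~1.5e-5  */
    printf( "24 bit: %.3g\n", 1.0 / 16777215.0 );     /* ~6.0e-8  */
    printf( "32 bit: %.3g\n", 1.0 / 4294967295.0 );   /* ~2.3e-10 */
    return 0;
}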

On the other hand, the format GL_DEPTH_COMPONENT32F is an IEEE 754 standard 32 bit floating point format.
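
For a floating point depth buffer the resolvable step is not constant; it depends on the magnitude of the depth value itself, because the spacing between adjacent 32 bit floats grows with the value. A small sketch illustrating this (the sample values 0.001 and 0.9 are arbitrary):

#include <math.h>
#include <stdio.h>

int main( void )
{
    float a = 0.001f, b = 0.9f;
    // distance to the next representable float above the value
    printf( "step near %g: %g\n", a, nextafterf( a, 1.0f ) - a );   /* ~1.2e-10 */
    printf( "step near %g: %g\n", b, nextafterf( b, 1.0f ) - b );   /* ~6.0e-8  */
    return 0;
}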

answered Sep 28 '22 by Rabbid76