How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?

I have read a lot of information about getting depth with the fragment shader,

such as

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=234519

but I still don't know whether or not gl_FragCoord.z is linear.

The GLSL specification says its range is [0, 1] in screen space, without mentioning whether it is linear or not.

I think linearity is vital, since I will use the rendered model to match the depth map from a Kinect.

Then if it is not linear, how do I linearize it in world space?

asked Oct 15 '11 by tomriddle_1234



1 Answer

but I still don't know whether or not gl_FragCoord.z is linear.

Whether gl_FragCoord.z is linear depends on the projection matrix: for an orthographic projection gl_FragCoord.z is linear, but for a perspective projection it is not.

In general, the depth (gl_FragCoord.z and gl_FragDepth) is calculated as follows (see GLSL gl_FragCoord.z Calculation and Setting gl_FragDepth):

float ndc_depth = clip_space_pos.z / clip_space_pos.w;           // perspective divide -> NDC z in [-1, 1]
float depth = (((farZ-nearZ) * ndc_depth) + nearZ + farZ) / 2.0; // nearZ/farZ are the glDepthRange values (0.0 and 1.0 by default)

The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. It transforms from eye space to clip space, and the clip-space coordinates are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates.
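
For illustration, here is a minimal vertex-shader sketch of that chain (the attribute and uniform names are assumptions for the example, not part of the original answer):

#version 330 core

layout(location = 0) in vec3 a_position;   // assumed attribute: model-space position

uniform mat4 u_model_view;   // assumed uniform: model-view matrix (model space -> eye space)
uniform mat4 u_projection;   // assumed uniform: projection matrix (eye space -> clip space)

void main()
{
    vec4 eye_space_pos  = u_model_view * vec4(a_position, 1.0);
    vec4 clip_space_pos = u_projection * eye_space_pos;

    // The fixed-function stage then divides by clip_space_pos.w (giving NDC)
    // and maps the z component from [-1, 1] to the depth range.
    gl_Position = clip_space_pos;
}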

Orthographic Projection

With an orthographic projection, the eye-space coordinates are mapped linearly to normalized device coordinates.


Orthographic Projection Matrix (shown in column-major memory order, translation terms in the bottom row):

r = right, l = left, b = bottom, t = top, n = near, f = far 

2/(r-l)         0               0               0
0               2/(t-b)         0               0
0               0               -2/(f-n)        0
-(r+l)/(r-l)    -(t+b)/(t-b)    -(f+n)/(f-n)    1

With an orthographic projection, the Z component is computed by the linear function:

z_ndc = z_eye * -2/(f-n) - (f+n)/(f-n)

[plot: the linear orthographic Z function]

Perspective Projection

With a perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
The eye-space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).


Perspective Projection Matrix (shown in column-major memory order):

r = right, l = left, b = bottom, t = top, n = near, f = far

2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)    -1    
0              0              -2*f*n/(f-n)    0

With a perspective projection, the Z component is computed by the rational function:

z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye

[plot: the non-linear perspective Z function]
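
To get a feeling for how non-linear this is: with n = 1 and f = 100, a point one unit behind the near plane (z_eye = -2) already maps to z_ndc = 1/99 ≈ 0.0101, which is a depth of roughly 0.505 after the [0, 1] mapping below. Half of the depth-buffer range is spent on the first unit of the frustum.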

Depth buffer

Since the normalized device coordinates are in the range (-1, -1, -1) to (1, 1, 1), the Z coordinate has to be mapped to the depth-buffer range [0, 1] (the default glDepthRange):

depth = (z_ndc + 1) / 2 


Then if it is not linear, how do I linearize it in world space?

To convert from the depth of the depth buffer to the original Z coordinate, the projection type (orthographic or perspective) and the near and far planes have to be known.

Orthographic Projection

n = near, f = far

z_eye = depth * (f-n) + n;
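
In GLSL this is a one-liner; a sketch, with u_near and u_far as assumed uniforms holding the projection's near and far planes:

uniform float u_near;   // assumed uniform: near plane
uniform float u_far;    // assumed uniform: far plane

// Orthographic case: the stored depth is already linear in eye space.
float linearize_ortho(float depth)   // depth from gl_FragCoord.z or a depth texture
{
    return depth * (u_far - u_near) + u_near;
}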

Perspective Projection

n = near, f = far

z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
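
Put together, a minimal fragment-shader sketch that linearizes gl_FragCoord.z for a perspective projection and writes it out as a gray value (u_near and u_far are assumed uniforms):

#version 330 core

uniform float u_near;   // assumed uniform: near plane
uniform float u_far;    // assumed uniform: far plane

out vec4 frag_color;

void main()
{
    float z_ndc = 2.0 * gl_FragCoord.z - 1.0;   // depth [0, 1] -> NDC [-1, 1]
    float z_eye = 2.0 * u_near * u_far / (u_far + u_near - z_ndc * (u_far - u_near));

    float gray = (z_eye - u_near) / (u_far - u_near);   // remap to [0, 1] for display
    frag_color = vec4(vec3(gray), 1.0);
}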

If the perspective projection matrix is known, this can be done as follows:

A = prj_mat[2][2]
B = prj_mat[3][2]
z_eye = B / (A + z_ndc)
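
As a GLSL sketch of this variant, wrapping the answer's prj_mat lookup in a helper function (note that GLSL matrix indexing is [column][row]):

uniform mat4 prj_mat;   // the perspective projection matrix

float linearize_persp(float depth)   // depth from gl_FragCoord.z or a depth texture
{
    float z_ndc = 2.0 * depth - 1.0;
    float A = prj_mat[2][2];   // == -(f+n)/(f-n)
    float B = prj_mat[3][2];   // == -2*f*n/(f-n)
    return B / (A + z_ndc);    // positive eye-space distance
}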

See also the answer to

How to recover view space position given view space depth value and ndc xy

answered Sep 20 '22 by Rabbid76