Sampling from a depth buffer in a shader returns values between 0 and 1, as expected. Given the near and far clip planes of the camera, how do I calculate the true z value at this point, i.e. the distance from the camera?
z' = (2^d - 1) * ((far + near) / (2 * (far - near)) + (1/z) * (-far * near) / (far - near) + 1/2) where d is the bit depth of the z-buffer (24 bits in my case) and z is the eye-space depth of the vertex, measured as a positive distance in front of the camera.
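As a sanity check: at z = near the bracketed factor evaluates to (far + near)/(2*(far - near)) - far/(far - near) + 1/2 = 0, and at z = far it evaluates to (far + near)/(2*(far - near)) - near/(far - near) + 1/2 = 1, so the stored value sweeps the full 0 .. 2^d - 1 range between the two clip planes.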
Z-buffering, also known as depth buffering, is a technique in computer graphics programming. It is used to determine whether an object (or part of an object) is visible in a scene. It can be implemented either in hardware or software, and is used to increase rendering efficiency.
The depth buffer is automatically created by the windowing system and stores its depth values with 16, 24 or 32 bits of precision. In most systems you'll see a depth buffer with a precision of 24 bits. When depth testing is enabled, OpenGL tests the depth value of a fragment against the contents of the depth buffer.
To linearize the sampled depth-buffer value, we can multiply the normalized device coordinate (NDC) vector by the inverse projection matrix and divide the result by its w coordinate (as the result is a homogeneous vector).
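A minimal fragment-shader sketch of that unprojection (the names depthBuffTex, invProjection and vTexCoord are placeholders of mine; invProjection is assumed to hold the inverse of the camera's projection matrix, computed on the CPU):

uniform sampler2D depthBuffTex; // depth attachment, values in [0,1]
uniform mat4 invProjection;     // inverse of the projection matrix
varying vec2 vTexCoord;         // screen-space coords in [0,1]

void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x;
    // Rebuild the NDC-space position: x, y and z all live in [-1,1]
    vec4 ndc = vec4(vec3(vTexCoord, z_b) * 2.0 - 1.0, 1.0);
    // Undo the projection; the result is homogeneous, so divide by w
    vec4 eyePos = invProjection * ndc;
    eyePos /= eyePos.w;
    // -eyePos.z is the linear distance along the view axis
    gl_FragColor = vec4(vec3(-eyePos.z), 1.0);
}

Unlike the closed-form expression below, this works for any invertible projection matrix, at the cost of a per-fragment matrix multiply.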
From http://web.archive.org/web/20130416194336/http://olivers.posterous.com/linear-depth-in-glsl-for-real
// == Post-process frag shader ===========================================
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;

void main(void)
{
    // Raw [0,1] value from the depth buffer
    float z_b = texture2D(depthBuffTex, vTexCoord).x;
    // Back to [-1,1] NDC
    float z_n = 2.0 * z_b - 1.0;
    // Back to eye-space units (zNear..zFar); use or output z_e from here
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
[edit] So here's the explanation (originally posted with 2 mistakes, which Christian pointed out in the comments; they are corrected below):
An OpenGL perspective matrix looks like this:
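(The matrix image from the original post didn't survive; schematically, the standard perspective projection matrix is:)

    [ sx   0    0    0 ]
    [ 0    sy   0    0 ]
    [ 0    0    A    B ]
    [ 0    0   -1    0 ]

where sx and sy are the x and y focal scale factors, A = -(far + near) / (far - near), and B = -2 * far * near / (far - near).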
When you multiply this matrix by a homogeneous point [x, y, z, 1], it gives you: [don't care, don't care, A*z + B, -z] (with A and B the two big components in the matrix).
OpenGL next does the perspective division: it divides this vector by its w component. This operation is not done in shaders (except in special cases like shadow mapping) but in hardware; you can't control it. Since w = -z, the Z value becomes -A - B/z.
We are now in Normalized Device Coordinates: the Z value, just like x and y, is in the [-1,1] range. Before storage it has to be remapped to [0,1] by the depth-range (viewport) transform: a scaling and offset is applied (0.5*z + 0.5 by default).
This final value is then stored in the buffer.
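In formulas (with A and B as above, and z the eye-space Z, which is negative for points in front of the camera):

    z_ndc = -A - B/z             // after the perspective division
    z_b   = 0.5 * z_ndc + 0.5    // after the depth-range remap; this is what's stored

Note that the shader's z_e comes out as the positive distance -z, which is usually what you want.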
The above code does the exact opposite: z_b is the raw value read from the buffer, z_n takes it back to the [-1,1] range, and z_e undoes the perspective division to recover eye-space distance.
The opposite function (from linear eye-space depth back to a buffer value, suitable for gl_FragDepth) is:
varying float depth; // Linear depth, in world units (positive distance)

void main(void)
{
    // Third row of the projection matrix (GLSL matrices are column-major,
    // so [2].z and [3].z pick out A and B)
    float A = gl_ProjectionMatrix[2].z;
    float B = gl_ProjectionMatrix[3].z;
    gl_FragDepth = 0.5 * (-A * depth + B) / depth + 0.5;
}
I know this is an old, old question, but I've found myself back here more than once on various occasions, so I thought I'd share my code that does the forward and reverse conversions.
This is based on @Calvin1602's answer. These work in GLSL or plain old C code.
uniform float zNear = 0.1;
uniform float zFar = 500.0;

// depthSample from depthTexture.r, for instance
float linearDepth(float depthSample)
{
    depthSample = 2.0 * depthSample - 1.0;
    float zLinear = 2.0 * zNear * zFar / (zFar + zNear - depthSample * (zFar - zNear));
    return zLinear;
}

// result suitable for assigning to gl_FragDepth
float depthSample(float linearDepth)
{
    float nonLinearDepth = (zFar + zNear - 2.0 * zNear * zFar / linearDepth) / (zFar - zNear);
    nonLinearDepth = (nonLinearDepth + 1.0) / 2.0;
    return nonLinearDepth;
}
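And a quick usage sketch (the sampler name depthTexture and the varying vTexCoord are placeholders of mine, not part of the answer above) that visualizes the linearized depth:

varying vec2 vTexCoord;
uniform sampler2D depthTexture; // hypothetical depth attachment

void main(void)
{
    float raw = texture2D(depthTexture, vTexCoord).r;
    float z = linearDepth(raw);  // eye-space units, zNear..zFar
    // depthSample(linearDepth(raw)) == raw, up to float precision
    gl_FragColor = vec4(vec3(z / zFar), 1.0); // white = far, black = near
}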