I use a logarithmic depth buffer; in my shaders I wrote the code described here.
In the vertex shader:
void main()
{
    // Transform to clip space and also pass the clip-space position
    // on to the fragment shader through gl_TexCoord[6]
    vec4 vertexPosClip = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = vertexPosClip;
    gl_TexCoord[6] = vertexPosClip;
}
In the fragment shader:
void main()
{
    gl_FragColor = ...
    const float C = 1.0;
    const float far = 1000000000.0;
    const float offset = 1.0;
    // Logarithmic depth: remap the depth to [0, 1] on a log scale
    gl_FragDepth = log(C * gl_TexCoord[6].z + offset) / log(C * far + offset);
}
Now, in one of my algorithms (SSAO, which works well with a classic depth buffer), I want to recover the depth of my fragment. Here is the formula I use:
const float C = 1.0;
const float far = 1000000000.0;
const float offset = 1.0;
// depth
z = (pow(C * far + offset, depth_in_depth_buffer) - offset) / C;
However, it doesn't work: the SSAO algorithm no longer works. Do you know whether the z I computed is the correct depth?
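For reference, I derived the formula by inverting the depth write above (the base of the logarithm cancels, so GLSL's natural log is fine):

depth = log(C*z + offset) / log(C*far + offset)
(C*far + offset)^depth = C*z + offset
z = ((C*far + offset)^depth - offset) / C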
How did you prepare the gl_ModelViewProjectionMatrix? If you used the standard one, vertexPosClip.z will contain a depth that has already been projected (it is nonlinear in the eye-space distance), and is thus wrong for this purpose.
Without modifying the matrix, you can use the w component instead, which contains the positive depth into the scene: for the standard perspective projection, the last row of the matrix is (0, 0, -1, 0), so clip-space w equals -z_eye.
gl_FragDepth = log(C * gl_TexCoord[6].w + offset) / log(C * far + offset);
Your computation of z from the depth-buffer value looks correct; just don't forget that you are recovering a positive depth into the scene, not the negative eye-space z that is standard in OpenGL.
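If your SSAO pass expects the usual OpenGL eye-space convention (negative z in front of the camera), negate the recovered value. A minimal sketch, where recoverEyeZ is an illustrative helper name and the constants must match the ones used when writing the depth:

float recoverEyeZ(float depthBufferValue)
{
    const float C = 1.0;
    const float far = 1000000000.0;
    const float offset = 1.0;
    // Invert the logarithmic encoding:
    // depth = log(C * w + offset) / log(C * far + offset)
    float w = (pow(C * far + offset, depthBufferValue) - offset) / C;
    // w is the positive distance into the scene; OpenGL eye space
    // looks down -z, so negate to get the conventional eye-space z
    return -w;
}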