In OpenGL you can linearize a depth value like so:
float linearize_depth(float d, float zNear, float zFar)
{
    float z_n = 2.0 * d - 1.0;
    return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
(Source: https://stackoverflow.com/a/6657284/10011415)
However, Vulkan handles depth values somewhat differently (https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/). I don't quite understand the math behind it; what changes would I have to make to the function to linearize a depth value in Vulkan?
To linearize a sampled depth-buffer value, we can multiply the normalized device coordinates (NDC) vector by the inverse projection matrix and divide the result by the w coordinate (since the result is a homogeneous vector).
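A minimal GLSL sketch of that unprojection, assuming invProj is the inverse of the projection matrix and uv is the fragment's texture coordinate in [0, 1] (both names are illustrative, not from the original post):
vec3 view_position_from_depth(vec2 uv, float depth, mat4 invProj)
{
    // Build the NDC position; in Vulkan the sampled depth is already in [0, 1]
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth, 1.0);
    // Unproject, then divide by w because the result is a homogeneous vector
    vec4 view = invProj * ndc;
    return view.xyz / view.w;
}
The z component of the result is the view-space (linear) depth; its sign depends on your projection convention.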
When depth testing is enabled, OpenGL tests the depth value of a fragment against the contents of the depth buffer. If the test passes, the fragment is rendered and the depth buffer is updated with the new depth value; if it fails, the fragment is discarded.
z' = (2^24 - 1) * ((far + near) / (2 * (far - near)) + (1 / z) * (-far * near) / (far - near) + 1/2), where z' is the value stored in the depth buffer (24 bits in my case) and z is the eye-space z value of the vertex.
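As a rough GLSL sketch of that forward mapping (a hypothetical helper, assuming a standard OpenGL perspective projection and z given as a positive eye-space distance):
float depth_from_eye_z(float z, float zNear, float zFar)
{
    // NDC depth in [-1, 1] for a standard OpenGL perspective projection
    float z_ndc = (zFar + zNear) / (zFar - zNear)
                - 2.0 * zFar * zNear / ((zFar - zNear) * z);
    // Window-space depth in [0, 1]; this is what ends up in the depth buffer
    return 0.5 * z_ndc + 0.5;
}
A 24-bit depth buffer then stores approximately round(d * (2^24 - 1)), and linearize_depth is the inverse of this mapping.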
The important difference between OpenGL and Vulkan here is that the normalized device coordinates (NDC) have a different range for z (the depth). In OpenGL it's -1 to 1 and in Vulkan it's 0 to 1.
However, in OpenGL when the depth is stored into a depth texture and you read from it, the value is further normalized to 0 to 1. This seems to be the case in your example, since the first line of your function maps it back to -1 to 1.
In Vulkan, your depth is always between 0 and 1, so the above function works in Vulkan as well. You can simplify it a bit though:
float linearize_depth(float d, float zNear, float zFar)
{
    // d is the [0, 1] depth value sampled from a Vulkan depth buffer
    return zNear * zFar / (zFar + d * (zNear - zFar));
}
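For reference, the simplified form follows from substituting z_n = 2.0 * d - 1.0 into the OpenGL version and cancelling a factor of 2:
2 * zNear * zFar / (zFar + zNear - (2*d - 1) * (zFar - zNear))
  = 2 * zNear * zFar / (2*zFar - 2*d * (zFar - zNear))
  = zNear * zFar / (zFar + d * (zNear - zFar))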