I am writing a deferred shader and am trying to pack my gbuffer more tightly. However, I can't seem to compute the view position from the view-space depth correctly:
// depth -> (gl_ModelViewMatrix * vec4(pos.xyz, 1)).z; where pos is the model space position
// fov   -> field of view in radians (0.62831855, 0.47123888)
// p     -> ndc position, x, y in [-1, 1]
vec3 getPosition(float depth, vec2 fov, vec2 p)
{
    vec3 pos;
    pos.x = -depth * tan( HALF_PI - fov.x/2.0 ) * (p.x);
    pos.y = -depth * tan( HALF_PI - fov.y/2.0 ) * (p.y);
    pos.z = depth;
    return pos;
}
The computed position is wrong. I know this because I am still storing the correct position in the gbuffer and testing using that.
The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from view (eye) space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC are in the range (-1,-1,-1) to (1,1,1).
In a perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
Perspective Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)   -(f+n)/(f-n)    -1
0              0             -2*f*n/(f-n)     0
it follows:
aspect = w / h
tanFov = tan( fov_y * 0.5 );
prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov
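Putting these relations together, the symmetric perspective matrix can be built directly from the field of view and the aspect ratio. A minimal GLSL sketch (the function name and parameters are illustrative, not an established API):
mat4 perspectiveMatrix( float fov_y, float aspect, float n, float f )
{
    float tanFov = tan( fov_y * 0.5 );
    // GLSL matrices are column-major: mat4( col0, col1, col2, col3 )
    return mat4(
        1.0 / (tanFov * aspect), 0.0,          0.0,            0.0,
        0.0,                     1.0 / tanFov, 0.0,            0.0,
        0.0,                     0.0,          -(f+n)/(f-n),  -1.0,
        0.0,                     0.0,          -2.0*f*n/(f-n), 0.0 );
}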
In a perspective projection, the Z component is computed by the rational function:
z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye
The depth (gl_FragCoord.z and gl_FragDepth) is calculated as follows:
z_ndc = clip_space_pos.z / clip_space_pos.w;
depth = (((farZ-nearZ) * z_ndc) + nearZ + farZ) / 2.0;
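With the common depth range glDepthRange(0.0, 1.0), this collapses to depth = z_ndc * 0.5 + 0.5. As a small GLSL sketch of the forward mapping (the function name is illustrative):
float depthFromZEye( float z_eye, float n, float f ) // z_eye < 0 in view space
{
    float z_ndc = ( -z_eye * (f+n)/(f-n) - 2.0*f*n/(f-n) ) / -z_eye;
    return z_ndc * 0.5 + 0.5; // assumes glDepthRange(0.0, 1.0)
}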
Since the projection matrix is defined by the field of view and the aspect ratio, the view-space position can be recovered from the field of view and the aspect ratio, provided that the projection is a symmetric perspective projection and that the normalized device coordinates, the depth, and the near and far planes are known.
Recover the Z distance in view space:
z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
Recover the view space position by the XY normalized device coordinates:
ndc_x, ndc_y = xy normalized device coordinates in range from (-1, -1) to (1, 1):
viewPos.x = z_eye * ndc_x * aspect * tanFov;
viewPos.y = z_eye * ndc_y * tanFov;
viewPos.z = -z_eye;
The projection parameters, defined by the field of view and the aspect ratio, are stored in the projection matrix. Therefore, for a symmetric perspective projection, the view-space position can also be recovered from the values of the projection matrix itself.
Note the relation between projection matrix, field of view and aspect ratio:
prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect);
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov;
prjMat[2][2] = -(f+n)/(f-n)
prjMat[3][2] = -2*f*n/(f-n)
Recover the Z distance in view space:
A = prjMat[2][2];
B = prjMat[3][2];
z_ndc = 2.0 * depth - 1.0;
z_eye = B / (A + z_ndc);
Recover the view space position by the XY normalized device coordinates:
viewPos.x = z_eye * ndc_x / prjMat[0][0];
viewPos.y = z_eye * ndc_y / prjMat[1][1];
viewPos.z = -z_eye;
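The same thing as a GLSL sketch, reading the parameters from the projection matrix (again only valid for a symmetric perspective projection; the function name is illustrative):
vec3 viewPosFromPrjMat( float depth, vec2 ndc_xy, mat4 prjMat )
{
    float z_ndc = 2.0 * depth - 1.0;
    float z_eye = prjMat[3][2] / (prjMat[2][2] + z_ndc);
    return vec3( z_eye * ndc_xy.x / prjMat[0][0],
                 z_eye * ndc_xy.y / prjMat[1][1],
                 -z_eye );
}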
Of course, the view-space position can also be recovered with the inverse projection matrix.
mat4 inversePrjMat = inverse( prjMat );
vec4 viewPosH = inversePrjMat * vec4( ndc_x, ndc_y, 2.0 * depth - 1.0, 1.0 );
vec3 viewPos = viewPosH.xyz / viewPosH.w;
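This variant makes no symmetry assumption, so it also works for asymmetric (off-center) frusta. As a GLSL sketch, assuming uv and depth are read from the gbuffer in the range [0, 1] (names are illustrative):
vec3 viewPosFromInvPrjMat( mat4 inversePrjMat, vec2 uv, float depth )
{
    vec4 ndc      = vec4( uv * 2.0 - 1.0, 2.0 * depth - 1.0, 1.0 );
    vec4 viewPosH = inversePrjMat * ndc;
    return viewPosH.xyz / viewPosH.w;
}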
I managed to make it work in the end. As it's a different method from the one above, I will detail it so that anyone who sees this has a solution.
Now we have all the values we need in view space. We can use the law of similar triangles to find the actual fragment position P':
P.xy = P_ndc * near * tan(fov/2.0)  // computation is the same for x and y
// Note that by the law of similar triangles, P'.xy / -depth = P.xy / near
P'.xy = P.xy / near * -depth;  // -depth because in OpenGL the camera looks down the -z axis
P'.z = depth;
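As a runnable GLSL version of this method (here fov is the full per-axis field of view as in the question, ndc is in [-1, 1], and depth is the negative view-space z; names are illustrative):
vec3 positionFromDepth( float depth, vec2 fov, vec2 ndc, float near )
{
    // Point on the near plane that projects onto this fragment
    vec2 P = ndc * near * tan( fov * 0.5 );
    vec3 pos;
    // Similar triangles: pos.xy / -depth = P / near
    pos.xy = P / near * -depth;
    pos.z  = depth;
    return pos;
}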
I wrote a deferred shader, and used this code to recompute the view-space position:
vec3 getFragmentPosition()
{
    vec4 sPos = vec4(gl_TexCoord[0].x, gl_TexCoord[0].y, texture2D(depthTex, gl_TexCoord[0].xy).x, 1.0);
    sPos.xyz = 2.0 * sPos.xyz - 1.0;  // scale xy and depth from [0, 1] to the NDC range [-1, 1]
    sPos = invPersp * sPos;
    return sPos.xyz / sPos.w;
}
where depthTex is the texture holding the depth info, and invPersp is a pre-calculated inverse perspective matrix. You take the fragment's screen-space position, scale it to the NDC range, and multiply it by the inverse perspective matrix to get view-space coordinates; the division by w undoes the perspective divide. The multiplication by two and subtraction by one scales the coordinates from [0, 1] (as they are stored in the texture) to [-1, 1].
Also, depending on what kind of MRTs you are using, the recalculated result won't be exactly equal to the stored position, since you lose floating-point precision.