There used to be field of view information in the VREyeParameters, but that was deprecated. So now I am wondering: is it possible to calculate it using the view/projection matrices provided by VRFrameData?
The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from view space to clip space. Clip space coordinates are homogeneous coordinates; they are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
At perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
If you want to know the corners of the camera frustum in view space, then you have to transform the corners of the normalized device space (-1, -1, -1), ..., (1, 1, 1) by the inverse projection matrix. To get Cartesian coordinates, the X, Y, and Z components of the result have to be divided by the W (4th) component of the result.
glMatrix is a library which provides matrix operations and data types such as mat4 and vec4:
// frameData is a VRFrameData instance, populated each frame via
// vrDisplay.getFrameData( frameData )
const projection  = mat4.clone( frameData.leftProjectionMatrix );
const inverse_prj = mat4.create();
mat4.invert( inverse_prj, projection );

// a corner of the NDC cube, as a homogeneous coordinate (w = 1)
const pt_ndc = [-1, -1, -1];
const v4_ndc = vec4.fromValues( pt_ndc[0], pt_ndc[1], pt_ndc[2], 1 );

// transform by the inverse projection, then do the perspective divide
const v4_view = vec4.create();
vec4.transformMat4( v4_view, v4_ndc, inverse_prj );
const pt_view = [ v4_view[0]/v4_view[3], v4_view[1]/v4_view[3], v4_view[2]/v4_view[3] ];
The transformation from view coordinates to world coordinates can be done with the inverse view matrix.
const view = mat4.clone( frameData.leftViewMatrix );
const inverse_view = mat4.create();
mat4.invert( inverse_view, view );

// transform the Cartesian view space point to world space
const v3_view  = vec3.clone( pt_view );
const v3_world = vec3.create();
vec3.transformMat4( v3_world, v3_view, inverse_view );
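For example, to get all eight corners of the view frustum in world space at once, the same unprojection can be applied to each corner of the NDC cube, using the inverse of the combined projection-view matrix. A minimal sketch, again assuming frameData is a populated VRFrameData instance:

// combined matrix: clip = projection * view, so world = inverse(projection * view) * ndc
const prj_view = mat4.create();
mat4.multiply( prj_view, frameData.leftProjectionMatrix, frameData.leftViewMatrix );
const inverse_prj_view = mat4.create();
mat4.invert( inverse_prj_view, prj_view );

// unproject every corner of the NDC cube and divide by w
const corners_world = [];
for ( const x of [-1, 1] ) {
    for ( const y of [-1, 1] ) {
        for ( const z of [-1, 1] ) {
            const v4 = vec4.fromValues( x, y, z, 1 );
            vec4.transformMat4( v4, v4, inverse_prj_view );
            corners_world.push( [ v4[0]/v4[3], v4[1]/v4[3], v4[2]/v4[3] ] );
        }
    }
}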
Note, the left and right projection matrices are not symmetric. This means the line of sight is not in the center of the frustum, and the matrices differ for the left and the right eye.
Further note, a perspective projection matrix looks like this:

r = right, l = left, b = bottom, t = top, n = near, f = far

2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)   -(f+n)/(f-n)    -1
0              0             -2*f*n/(f-n)     0
where:

a  = w / h
ta = tan( fov_y / 2 )

2 * n / (r-l) = 1 / (ta * a)
2 * n / (t-b) = 1 / ta
If the projection is symmetric, where the line of sight is in the center of the viewport and the field of view is not displaced, then the matrix can be simplified:
1/(ta*a)    0          0               0
0           1/ta       0               0
0           0         -(f+n)/(f-n)    -1
0           0         -2*f*n/(f-n)     0
This means the field of view angle can be calculated by:
fov_y = Math.atan( 1/prjMat[5] ) * 2; // prjMat[5] is prjMat[1][1]
and the aspect ratio by:
aspect = prjMat[5] / prjMat[0];
The calculation for the field of view angle also works if the projection matrix is only symmetric along the horizontal, i.e. if -bottom is equal to top. For the projection matrices of the 2 eyes this should be the case.
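Putting this together, the four per-side field of view angles (the information VREyeParameters used to expose) can also be recovered from the general, asymmetric matrix above, since (1 + prjMat[9]) / prjMat[5] = t/n, (1 - prjMat[9]) / prjMat[5] = -b/n, and analogously for the horizontal sides. A minimal sketch (the function name fieldOfView is illustrative; prjMat is a 16-element column-major projection matrix such as frameData.leftProjectionMatrix):

// recover per-side FOV half-angles, in degrees
function fieldOfView( prjMat ) {
    const radToDeg = 180 / Math.PI;
    return {
        upDegrees:    Math.atan( (1 + prjMat[9]) / prjMat[5] ) * radToDeg, // t/n
        downDegrees:  Math.atan( (1 - prjMat[9]) / prjMat[5] ) * radToDeg, // -b/n
        rightDegrees: Math.atan( (1 + prjMat[8]) / prjMat[0] ) * radToDeg, // r/n
        leftDegrees:  Math.atan( (1 - prjMat[8]) / prjMat[0] ) * radToDeg  // -l/n
    };
}

For a symmetric matrix, prjMat[8] and prjMat[9] are 0 and this reduces to the fov_y and aspect formulas above.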
Furthermore:
z_ndc = 2.0 * depth - 1.0;
z_eye = 2.0 * n * f / (f + n - z_ndc * (f - n));
By substituting the fields of the projection matrix, this becomes:
A = prj_mat[2][2]
B = prj_mat[3][2]
z_eye = B / (A + z_ndc)
This means the distance to the near plane and to the far plane can be calculated by:
A = prj_mat[10]; // prj_mat[10] is prj_mat[2][2]
B = prj_mat[14]; // prj_mat[14] is prj_mat[3][2]
near = B / (A - 1);
far  = B / (A + 1);
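As a sketch, these relations can be wrapped in a small helper (depthToEyeZ is a made-up name) that turns a depth buffer value in [0, 1] back into a view space distance:

function depthToEyeZ( depth, prjMat ) {
    const A = prjMat[10];            // prj_mat[2][2]
    const B = prjMat[14];            // prj_mat[3][2]
    const z_ndc = 2.0 * depth - 1.0; // depth in [0, 1] -> NDC z in [-1, 1]
    return B / (A + z_ndc);          // distance along the view direction
}

// depthToEyeZ( 0.0, prjMat ) yields near, depthToEyeZ( 1.0, prjMat ) yields far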
SOHCAHTOA (pronounced "soh-cah-toe-ah") tells us the relationships of the various sides of a right triangle to the various trigonometry functions: Soh: sine = opposite / hypotenuse; Cah: cosine = adjacent / hypotenuse; Toa: tangent = opposite / adjacent.
So, looking at a frustum image, we can take the right triangle formed by the eye, the center of the near plane, and the top of the near plane to compute the tangent of half the field of view, and we can use the arc tangent to turn that tangent back into an angle.
Since we know the projection matrix takes our world space frustum and converts it to clip space and ultimately to normalized device space (-1, -1, -1) to (+1, +1, +1), we can get the positions we need by multiplying the corresponding points in NDC space by the inverse of the projection matrix:
eye = 0,0,0
centerAtNearPlane = inverseProjectionMatrix * (0,0,-1)
topCenterAtNearPlane = inverseProjectionMatrix * (0, 1, -1)
Then
opposite = topCenterAtNearPlane.y
adjacent = -centerAtNearPlane.z
halfFieldOfView = Math.atan2(opposite, adjacent)
fieldOfView = halfFieldOfView * 2
Let's test
const m4 = twgl.m4;
const fovValueElem = document.querySelector("#fovValue");
const resultElem = document.querySelector("#result");
let fov = degToRad(45);

function updateFOV() {
  fovValueElem.textContent = radToDeg(fov).toFixed(1);

  // get a projection matrix from somewhere (like VR)
  const projection = getProjectionMatrix();

  // now that we have a projection matrix, recompute the FOV from it
  const inverseProjection = m4.inverse(projection);
  const centerAtZNear = m4.transformPoint(inverseProjection, [0, 0, -1]);
  const topCenterAtZNear = m4.transformPoint(inverseProjection, [0, 1, -1]);
  const opposite = topCenterAtZNear[1];
  const adjacent = -centerAtZNear[2];
  const halfFieldOfView = Math.atan2(opposite, adjacent);
  const fieldOfView = halfFieldOfView * 2;
  resultElem.textContent = radToDeg(fieldOfView).toFixed(1);
}
updateFOV();

function getProjectionMatrix() {
  // doesn't matter. We just want a projection matrix as though
  // someone else made it for us.
  const aspect = 2 / 1;
  // choose some zNear and zFar
  const zNear = .5;
  const zFar = 100;
  return m4.perspective(fov, aspect, zNear, zFar);
}

function radToDeg(rad) {
  return rad / Math.PI * 180;
}

function degToRad(deg) {
  return deg / 180 * Math.PI;
}

document.querySelector("input").addEventListener('input', (e) => {
  fov = degToRad(parseInt(e.target.value));
  updateFOV();
});
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<input id="fov" type="range" min="1" max="179" value="45"><label>fov: <span id="fovValue"></span></label>
<div>computed fov: <span id="result"></span></div>
Note this assumes the center of the frustum is directly in front of the eye. If it's not, then you'd probably have to compute adjacent by computing the length of the vector from the eye to centerAtZNear:
const v3 = twgl.v3;
...
const adjacent = v3.length(centerAtZNear);