I'm writing a ray tracer (mostly for fun), and whilst I've written one in the past and spent a decent amount of time searching, no tutorial seems to shed light on how to calculate the eye rays in a perspective projection without using matrices.
I believe the last time I did it was by (potentially inefficiently) rotating the eye vectors x/y degrees from the camera direction vector using a Quaternion class. That was in C++, and I'm doing this one in C#, though that's not so important.
Pseudocode (assuming V * Q = transform operation):

yDiv = fovy / height
xDiv = fovx / width

for x = 0 to width
    for y = 0 to height
        xAng = (x - width / 2) * xDiv    # angular offset from the view centre
        yAng = (y - height / 2) * yDiv
        Q1 = quaternion(up vector, xAng)
        Q2 = quaternion(camera right vector, yAng)
        Q3 = mult(Q1, Q2)
        pixelRay = transform(Q3, camera direction)
        raytrace pixelRay
    next
next
I think the actual problem with this is that it's simulating a spherical screen surface, not a flat screen surface.
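For concreteness, here's roughly what that approach looks like in C# with System.Numerics (a sketch, not my actual code; Raytrace is a placeholder for the tracer entry point, and angles are in radians):

using System.Numerics;

static class QuaternionEyeRays
{
    // Sketch of the quaternion approach above. Because it steps by a fixed
    // angle per pixel, it effectively samples a spherical screen surface.
    public static void Render(Vector3 eye, Vector3 forward, Vector3 up, Vector3 right,
                              float fovX, float fovY, int width, int height)
    {
        float xDiv = fovX / width;
        float yDiv = fovY / height;

        for (int x = 0; x < width; x++)
        {
            for (int y = 0; y < height; y++)
            {
                float xAng = (x - width / 2f) * xDiv;
                float yAng = (y - height / 2f) * yDiv;

                // Yaw around the up vector, pitch around the right vector.
                Quaternion q1 = Quaternion.CreateFromAxisAngle(up, xAng);
                Quaternion q2 = Quaternion.CreateFromAxisAngle(right, yAng);
                Quaternion q3 = Quaternion.Concatenate(q1, q2);

                Vector3 pixelRayDir = Vector3.Transform(forward, q3);
                // Raytrace(eye, pixelRayDir);  // placeholder
            }
        }
    }
}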
Mind you, whilst I know how and why to use cross products, dot products, matrices and such, my actual 3D mathematics problem solving skills aren't fantastic.
So, given the camera's position, direction, up vector and field of view: what is the actual method to produce an eye ray for x/y pixel coordinates in a ray tracer?
To clarify: I know exactly what I'm trying to calculate; I'm just not great at coming up with the 3D math to compute it, and no ray tracer code I've found seems to have the code I need to compute the eye ray for an individual pixel.
The Whitted algorithm is the classical example of an algorithm that uses ray tracing to produce photo-realistic computer-generated images. Many more advanced light-transport algorithms have been developed since the paper was first published.
A ray is the half-line P = orig + t * dir, where t is the distance from the ray's origin to the point P on the half-line. This variable can be either negative or positive: if t is negative, the point on the ray is behind the ray's origin, and if t is positive, the point P is in "front" of the ray's origin.
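As a minimal illustration in C# (a sketch; the struct and method names are mine, and System.Numerics supplies Vector3):

using System.Numerics;

// Minimal ray representation: P = orig + t * dir.
readonly struct Ray
{
    public readonly Vector3 Orig;  // origin of the half-line
    public readonly Vector3 Dir;   // direction, normalized on construction

    public Ray(Vector3 orig, Vector3 dir)
    {
        Orig = orig;
        Dir = Vector3.Normalize(dir);
    }

    // Point at parameter t: behind the origin for t < 0, in front for t > 0.
    public Vector3 PointAt(float t) => Orig + t * Dir;
}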
Primary rays are cast directly from the camera. They are called primary rays because their creation is not contingent on the prior existence of any other ray.
INPUT: camera_position_vec, direction_vec, up_vec, screen_distance

right_vec = direction_vec x up_vec

for y from 0 to 1600:
    for x from 0 to 2560:
        # location of the point P_2d = (x, y) in 3D space on the screen rectangle
        P_3d = camera_position_vec
             + screen_distance * direction_vec
             + (y - 800) * -up_vec
             + (x - 1280) * right_vec
        ray = Ray(camera_position_vec, P_3d)
        yield "the eye-ray for P_2d is `ray`"
x means the cross product.
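Translated into C# with System.Numerics (a sketch under the same assumptions; the 2560x1600 resolution and the one-world-unit-per-pixel scale are carried over from the pseudocode, Ray is the struct sketched earlier, and Trace is a placeholder):

using System.Numerics;

static class EyeRays
{
    // One eye ray per pixel, through a flat screen rectangle placed
    // screenDistance units in front of the camera.
    public static void GenerateEyeRays(
        Vector3 cameraPosition, Vector3 direction, Vector3 up, float screenDistance)
    {
        const int Width = 2560, Height = 1600;

        Vector3 right = Vector3.Cross(direction, up);

        for (int y = 0; y < Height; y++)
        {
            for (int x = 0; x < Width; x++)
            {
                // The pixel's point on the screen rectangle in 3D space.
                Vector3 p3d = cameraPosition
                            + screenDistance * direction
                            + (y - Height / 2f) * -up
                            + (x - Width / 2f) * right;

                // Eye ray from the camera through that point.
                var ray = new Ray(cameraPosition, p3d - cameraPosition);
                // Trace(ray, x, y);  // placeholder
            }
        }
    }
}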
edit: The answer assumes that direction_vec is normalized, as it should be. right_vec is in the picture (seemingly where the left should be), but right_vec is not necessary and, if included, should always be in the same direction as -(up_vec x direction_vec). Furthermore, the picture implies that the x-coord increases as one goes right and the y-coord increases as one goes down; the signs have been changed slightly to reflect that. A zoom may be performed either by multiplying the x- and y-terms in the equation or, more efficiently, by multiplying the vectors and using scaled_up_vec and scaled_right_vec. A zoom is, however, equivalent (since aperture doesn't matter; this is a perfect pinhole camera) to changing the field of view (FoV), which is a much nicer quantity to deal with than an arbitrary "zoom". For information about how to implement FoV, see my comment below.
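For a concrete illustration, here's a minimal C# sketch of the usual pinhole relation halfWidth = tan(fov / 2) * screenDistance (all names are mine; it assumes square pixels and a horizontal FoV in radians):

using System;
using System.Numerics;

static class FovHelpers
{
    // Computes scaled screen-basis vectors from a horizontal field of view,
    // usable directly in place of up_vec/right_vec in the loop above.
    public static (Vector3 scaledRight, Vector3 scaledUp) ScreenBasis(
        Vector3 direction, Vector3 up, float fovXRadians,
        int width, float screenDistance)
    {
        Vector3 right = Vector3.Normalize(Vector3.Cross(direction, up));

        // Half the screen width in world units for the given FoV,
        // then the world-space size of a single (square) pixel.
        float halfWidth = MathF.Tan(fovXRadians / 2f) * screenDistance;
        float worldUnitsPerPixel = halfWidth / (width / 2f);

        return (right * worldUnitsPerPixel,
                Vector3.Normalize(up) * worldUnitsPerPixel);
    }
}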