
How to move a camera in a ray tracer?

I am currently working on ray-tracing techniques and I think I've done a pretty good job so far, but I haven't covered the camera yet.

Until now, I have used a rectangular fragment of a plane as the view plane, spanning from (-width/2, height/2, 200) to (width/2, -height/2, 200) (200 is just a fixed z value and can be changed).

In addition to that, I mostly place the camera at e = (0, 0, 1000), and I use a perspective projection.

I send rays from point e through the pixels, and after calculating each pixel's color I write it to the corresponding pixel of the image.

Here is an image I created. Hopefully you can guess where the eye and the view plane are by looking at it.

My question starts here. It's time to move my camera around, but I don't know how to map the 2D view-plane coordinates to canonical (world) coordinates. Is there a transformation matrix for that?

The method I have in mind requires knowing the 3D coordinates of the pixels on the view plane, but I am not sure it's the right method to use. So, what do you suggest?

asked Oct 25 '12 by hevele

2 Answers

There are a variety of ways to do it. Here's what I do:

  1. Choose a point to represent the camera location (camera_position).
  2. Choose a vector that indicates the direction the camera is looking (camera_direction). (If you know a point the camera is looking at, you can compute this direction vector by subtracting camera_position from that point.) You probably want to normalize camera_direction, in which case it's also the normal vector of the image plane.
  3. Choose another normalized vector that's (approximately) "up" from the camera's point of view (camera_up).
  4. camera_right = Cross(camera_direction, camera_up)
  5. camera_up = Cross(camera_right, camera_direction) (This corrects for any slop in the choice of "up"; see the sketch after this list.)
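
If it helps to see those five steps in code, here is a minimal sketch. The Vector3 type and the Cross/Normalize helpers are assumptions standing in for whatever vector math your ray tracer already has, and world +Y is assumed as the rough "up" vector:

#include <cmath>

// Assumed minimal vector helpers; substitute your own math library.
struct Vector3 { double x, y, z; };

Vector3 operator-(const Vector3& a, const Vector3& b) {
  return {a.x - b.x, a.y - b.y, a.z - b.z};
}

Vector3 Cross(const Vector3& a, const Vector3& b) {
  return {a.y * b.z - a.z * b.y,
          a.z * b.x - a.x * b.z,
          a.x * b.y - a.y * b.x};
}

Vector3 Normalize(const Vector3& v) {
  double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return {v.x / len, v.y / len, v.z / len};
}

// Build the camera basis from the camera position and a look-at point.
void BuildCameraBasis(const Vector3& camera_position, const Vector3& look_at,
                      Vector3* camera_direction, Vector3* camera_right,
                      Vector3* camera_up) {
  *camera_direction = Normalize(look_at - camera_position);      // steps 1-2
  Vector3 rough_up = {0.0, 1.0, 0.0};                            // step 3
  *camera_right = Normalize(Cross(*camera_direction, rough_up)); // step 4
  *camera_up = Cross(*camera_right, *camera_direction);          // step 5
}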

Visualize the "center" of the image plane at camera_position + camera_direction. The up and right vectors lie in the image plane.

You can choose a rectangular section of the image plane to correspond to your screen. The ratio of this section's width or height to the length of camera_direction determines the field of view. To zoom in, lengthen camera_direction or shrink the width and height; do the opposite to zoom out.
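
To make that ratio concrete, the horizontal field of view follows from ordinary pinhole geometry. This is a general relation rather than something specific to this answer, and plane_width is an illustrative name, not a variable defined above:

#include <cmath>

// Horizontal field of view, in radians, for an image-plane section of
// width `plane_width` sitting at distance `direction_length`
// (the length of camera_direction) from the eye.
double HorizontalFov(double plane_width, double direction_length) {
  return 2.0 * std::atan(0.5 * plane_width / direction_length);
}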

So given a pixel position (i, j), you want the (x, y, z) of that pixel on the image plane. From that you can subtract camera_position to get a ray vector (which then needs to be normalized).

Ray ComputeCameraRay(int i, int j) {
  const double width = 512.0;   // pixels across
  const double height = 512.0;  // pixels high
  // Map the pixel index into [-0.5, 0.5] on each axis. (If your image's
  // row 0 is at the top, you may want to negate normalized_j so the
  // result isn't vertically flipped.)
  double normalized_i = (i / width) - 0.5;
  double normalized_j = (j / height) - 0.5;
  // Start at the center of the image plane (camera_position +
  // camera_direction) and walk along the right and up vectors.
  Vector3 image_point = normalized_i * camera_right +
                        normalized_j * camera_up +
                        camera_position + camera_direction;
  Vector3 ray_direction = image_point - camera_position;
  return Ray(camera_position, ray_direction);
}

This is meant to be illustrative, so it is not optimized.
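
As a usage sketch, a render loop would call it once per pixel, roughly like this. Trace, Color, and image.SetPixel are hypothetical placeholders for your own shading and image code:

// Hypothetical render loop: Trace() and image.SetPixel() stand in
// for your own scene intersection/shading and image output.
for (int j = 0; j < 512; ++j) {
  for (int i = 0; i < 512; ++i) {
    Ray ray = ComputeCameraRay(i, j);
    Color color = Trace(ray);     // intersect the scene and shade
    image.SetPixel(i, j, color);
  }
}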

answered Oct 22 '22 by Adrian McCarthy


For rasterising renderers, you tend to need a transformation matrix because that's how you map directly from 3D world coordinates to 2D screen coordinates.

For ray tracing, it's not necessary because you're typically starting from a known pixel coordinate in 2D space.

Given the eye position, a point in 3-space at the center of the screen, and vectors for "up" and "right", it's quite easy to calculate the 3D ray that goes from the eye position through the specified pixel.
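
In code form, that calculation is only a couple of lines. This is a sketch of the idea just described; the names here are illustrative, not taken from the linked sample:

// u, v: the pixel's offsets from the screen center, expressed in
// world units along the "right" and "up" vectors.
Vector3 point_on_screen = screen_center + u * right + v * up;
Vector3 ray_direction = Normalize(point_on_screen - eye);
// The ray starts at `eye` and travels along `ray_direction`.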

I've previously posted some sample code from my own ray tracer at https://stackoverflow.com/a/12892966/6782

answered Oct 22 '22 by Alnitak