 

References for depth of field implementation in a raytracer?

I have a basic raytracer and I want to implement depth of field. Can you please recommend resources I can use, such as books and code?

Thanks

asked Nov 23 '12 by user1796942


1 Answer

I figured it out from the little bit of information on this page: http://cg.skeelogy.com/depth-of-field-using-raytracing/, specifically the diagrams near the bottom. I think I did it a little bit differently than how it's shown, but the concept is pretty simple.

I can explain the general idea of what is going on and how to implement it (I'll try to be concise). Light reflects off of any given point in all directions (generally speaking), so it's not actually a single ray going between the render-pt and your eye, it's a cone of light leaving the render-pt and expanding towards the eye. The lens of your eye/camera will tend to bend these light rays such that the cone stops expanding and starts contracting again. For things to be perfectly in focus, the cone should contract to a point on your retina/frame, but this only works at one specific distance from the lens: a distance indicated by the "focal plane" in the referenced page (though I think it should really be a sphere centered on the eye, not a plane).

For anything in front of the focal plane, the cone of light will be bent more: it will focus to a point in front of the retina/frame, and then start expanding again, so by the time it reaches the frame, it is no longer a point, but a circle. Similarly, for points behind the focal plane, the cone will be bent less and will not yet have converged to a point when it reaches the frame. In both cases, the effect is that a single point in the scene ends up smeared out across multiple pixels.
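For reference, the standard thin-lens relation quantifies this bending (the sampling approach described below never needs it explicitly, so treat it as background): a point at distance d_o in front of a lens with focal length f converges to a point at distance d_i behind the lens, where

    1/d_o + 1/d_i = 1/f

Only one object distance makes d_i land exactly on the retina/frame; points nearer or farther converge in front of or behind it, and what lands on the frame is a blur circle whose size grows with the aperture and with how far the point is from that in-focus distance.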

For implementation, you can kind of turn this idea on its head: instead of rendering every point in the scene to several pixels, you can render several nearby points to a single pixel, which is of course what would really happen, since the "smeared out" light circles from neighboring points will end up overlapping and therefore each contribute to a pixel.

So here's how I implemented it:

First, define an aperture: a planar area centered on your eye and parallel to the retina/frame. The bigger the aperture, the more evident the DOF effect will be. Apertures are typically just circles, in which case the aperture is easily defined by its radius. Other shapes can lead to different lighting effects.
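For a circular aperture, picking a random point on it (which the per-pixel steps below rely on) only takes a few lines. Here's a minimal sketch in Python; the function name and the use of plain tuples for vectors are just for illustration, and right/up are assumed to be unit vectors spanning the aperture plane:

    import math
    import random

    def sample_aperture_point(eye, right, up, aperture_radius):
        # Uniform sample over a disk of the given radius, centered on the
        # eye and spanned by the (unit) right/up vectors. Drawing the
        # radius as R*sqrt(u) keeps the samples uniform over the disk's
        # area instead of clustering them near the center.
        theta = 2.0 * math.pi * random.random()
        r = aperture_radius * math.sqrt(random.random())
        dx = r * math.cos(theta)
        dy = r * math.sin(theta)
        return tuple(e + dx * rt + dy * u
                     for e, rt, u in zip(eye, right, up))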

Also define a "focal distance". I don't think that's actually the correct term for it, but it's the distance from the eye at which things will be perfectly in focus.

To render each pixel (a code sketch putting these steps together follows after the list):

  1. Start by casting a ray like normal from the eye through the pixel out into the scene. Instead of intersecting it with objects in the scene, though, you just want to find the point on the ray for which the distance from the eye is equal to the selected focal distance. Call this point the focal point for the pixel.
  2. Now select a random starting point on the aperture. For a circular aperture this is easy: pick a random polar angle and a random radius (no greater than the radius of the aperture). You want a uniform distribution over the entire aperture; don't try to bias it towards the center or anything. Note that drawing the radius uniformly actually clusters samples near the center, so draw it as R*sqrt(u) with u uniform in [0, 1], as in the sampling sketch above.
  3. Cast a ray from your selected point on the aperture through the focal point. Note that it will not necessarily pass through the same pixel; that's OK.
  4. Render this ray the way you normally would (e.g., path tracing, or just finding the nearest point of intersection, etc.).
  5. Repeat steps 2, 3, and 4 some number of times, using a different random starting point on the aperture each time, but always casting through the focal point. Sum up the rendered color values from all of the rays and use that as the value for this pixel (as usual, divide by a constant attenuation factor if necessary).
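Putting the steps together, a per-pixel loop might look roughly like the sketch below. This is only an outline under some assumptions: pixel_dir is the normalized direction of the usual eye-through-pixel ray, trace(origin, direction) stands in for whatever your raytracer already does for a single ray (path tracing, nearest-intersection shading, etc.) and is assumed to return an (r, g, b) tuple, and sample_aperture_point is the helper sketched earlier.

    import math

    def render_pixel_dof(eye, pixel_dir, focal_distance,
                         right, up, aperture_radius,
                         trace, num_samples=150):
        # Step 1: the focal point lies along the primary ray, at the
        # chosen focal distance from the eye.
        focal_point = tuple(e + d * focal_distance
                            for e, d in zip(eye, pixel_dir))

        total = [0.0, 0.0, 0.0]
        for _ in range(num_samples):
            # Step 2: random starting point on the aperture.
            origin = sample_aperture_point(eye, right, up, aperture_radius)
            # Step 3: ray from that point through the focal point.
            direction = tuple(f - o for f, o in zip(focal_point, origin))
            length = math.sqrt(sum(c * c for c in direction))
            direction = tuple(c / length for c in direction)
            # Step 4: render the ray as usual and accumulate its color.
            color = trace(origin, direction)
            total = [t + c for t, c in zip(total, color)]

        # Step 5: average the samples to get the pixel's color.
        return tuple(t / num_samples for t in total)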

The more rays you use for each pixel, the better the quality will be, of course. I've been using about 150 rays per pixel to get decent but not great quality. You can see the effect with quite a bit fewer (say 50 or 60 rays), but fewer rays will tend to produce graininess in the image, especially for things that are very out of focus. The number of rays you need also depends on the aperture size: a smaller aperture won't require as many rays, but you won't get as much blurring effect.

Obviously, you're greatly increasing your workload by doing this, essentially multiplying it by the number of rays per pixel, so if you have any optimizations left to make in your raytracer, now would be a good time to make them. The good news, if you happen to have multiple processors available, is that this is embarrassingly parallel once you've found the focal point for a pixel.
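If you go that route, a rough sketch of the parallel structure using Python's multiprocessing might look like this; render_row is just a stand-in for whatever fills one row of your frame (for example, calling a per-pixel routine like the one sketched above), and the frame size is a made-up example:

    from multiprocessing import Pool

    WIDTH, HEIGHT = 640, 480  # hypothetical frame size

    def render_row(y):
        # Stand-in for the real per-row work: each pixel (and hence each
        # row) is independent, so rows can be handed to separate worker
        # processes with no coordination between them.
        return [(0.0, 0.0, 0.0) for _x in range(WIDTH)]

    if __name__ == "__main__":
        with Pool() as pool:
            image = pool.map(render_row, range(HEIGHT))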

A little more explanation

The image below should give you an idea of what's happening, and why it works out to be equivalent to what really occurs in an eye or camera. It shows two pixels being rendered, one pixel illustrated in red, the other in blue. The dashed lines extending from the eye through each pixel to the focal "plane" are the rays you cast at the beginning to determine the focal point for the pixel. The semi-transparent cones indicate the complete set of rays which could possibly be selected at random to render each pixel (red cone for pixel 1, blue cone for pixel 2). Notice that since all rays pass through the focal point, each cone converges to a point exactly at the focal point.

The overlapping areas of the cones represent points in the scene which could end up being rendered to both pixel 1 and pixel 2: in other words, smeared out. Since each cone converges to a single point on the focal "plane", there is no overlap between cones there, so points at this distance are only rendered to a single pixel: they are perfectly in focus. Meanwhile, the further you get from the focal "plane" (either forwards or backwards), the more the cones spread out, so the more cones will overlap at any given point. Therefore, points that are very close or very far away will tend to be rendered to a large number of different pixels, so they will be very out of focus.

[Image: Model of DOF implementation for raytracing]

answered Sep 28 '22 by brianmearns