How to implement Depth of Field in Ray Tracer?

Can someone please help me with a depth-of-field implementation in a ray tracer?

I am using a simple pinhole camera model, as shown below. How can I generate a depth-of-field (DOF) effect with a pinhole camera model? (Image taken from Wikipedia.)

[Diagram: pinhole camera model, from Wikipedia]

My basic ray tracer is working fine.

The eye is at (0,0,0,1) with direction (dx, dy, 1.0f, 0.0f), where

float dx = (x * (1.0 / Imgwidth) ) - 0.5;
float dy = (y * (1.0 / Imgheight) ) - 0.5;
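
Put together, the primary ray for a pixel (x, y) is built roughly like this (Ray and Vector are my own classes, shown more fully in the code further down):

// Sketch: pinhole primary ray through pixel (x, y)
float dx = (x * (1.0f / Imgwidth))  - 0.5f;
float dy = (y * (1.0f / Imgheight)) - 0.5f;

// Eye sits at the origin (w = 1); the ray points toward the image plane at z = 1 (w = 0).
Ray primaryRay(Vector(0.0f, 0.0f, 0.0f, 1.0f),
               Vector(dx,   dy,   1.0f, 0.0f));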

Everywhere I read, people talk about sampling a lens placed between the image plane and the scene, for example as shown below (image taken from Wikipedia):

How can I introduce a lens in front of the image plane if all the rays originate from a single point (the camera/eye)?

If someone can help, that would be great!

Thank you.

Asked Apr 04 '12 by sinner


2 Answers

There are 3 ways to do this:

  1. The physically correct DOF requires multiple renders of the scene. Real cameras have depth of field because they are not actually pinholes: they have an aperture that admits light over a certain diameter. This is equivalent to taking a pinhole camera, taking a lot of pictures from points within that aperture, and averaging them.

    So basically, you need to rotate your camera slightly multiple times around your focus point, render the entire scene each time, accumulate the output colour in a buffer, and divide all values by the number of renders. (A sketch of this appears right after this list.)

  2. A simple post-processing effect - render not only the scene colour but also its depth, then use that depth to control the blur strength. Note that this technique requires some tricks to get seamless transitions between objects at different blur levels. (A sketch of this appears after the cost comparison below.)

  3. A more complex post-processing effect - create a depth buffer as before, then use it to render an aperture-shaped particle for every pixel of the original scene, using the depth to control the particle size just as you would use it for blur strength.
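
Here is a minimal sketch of approach (1) for a single pixel: jitter the ray origin over a circular aperture and aim every sample at the same point on the focal plane, then average the shading results. It assumes a Ray/Vector/Camera/Color API like the one in your question; sampleDisk, traceRay and the parameter names are placeholders of mine, not from any particular library.

#include <cmath>
#include <cstdlib>

// Uniform random point on a disk of the given radius (simple polar method).
static void sampleDisk(float radius, float& sx, float& sy)
{
    float r     = radius * std::sqrt(static_cast<float>(rand()) / RAND_MAX);
    float theta = 2.0f * 3.14159265f * (static_cast<float>(rand()) / RAND_MAX);
    sx = r * std::cos(theta);
    sy = r * std::sin(theta);
}

Color shadePixelWithDOF(const Scene& scene, const Camera& camera,
                        float px, float py,       // pixel centre in [-0.5, 0.5]
                        float apertureRadius,     // 0 degenerates to a plain pinhole
                        float focalDistance,      // distance to the plane in focus
                        int numSamples)
{
    // Pinhole ray through the pixel, in camera space (eye at the origin, looking down +z).
    Vector pinholeDir(px, py, 1.0f, 0.0f);

    // Point on the focal plane that every aperture sample must pass through.
    Vector focusPoint = Vector(0, 0, 0, 1) + focalDistance * pinholeDir;

    Color sum(0, 0, 0, 0);
    for (int i = 0; i < numSamples; ++i)
    {
        float sx, sy;
        sampleDisk(apertureRadius, sx, sy);

        Vector origin(sx, sy, 0.0f, 1.0f);   // jittered eye position on the lens
        Vector dir = focusPoint - origin;    // still passes through the focus point

        Ray ray = camera.Transform() * Ray(origin, dir);
        sum += traceRay(scene, ray);         // stand-in for your shading call
    }
    sum /= static_cast<float>(numSamples);
    return sum;
}

Objects at focalDistance stay sharp because all samples converge there; everything else is hit by diverging rays and blurs, and a larger apertureRadius gives a shallower depth of field.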

(1) gives the best results but is the most expensive technique; (2) is the cheapest; (3) is quite tricky but offers a good cost/effect balance.
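
And here is a rough sketch of the depth-driven blur in approach (2). The image layout, the RGB struct and the circle-of-confusion formula are illustrative assumptions, not from any particular engine; a production version would use a smarter blur and handle depth discontinuities to avoid halos.

#include <vector>
#include <cmath>
#include <algorithm>

struct RGB { float r, g, b; };

void depthOfFieldBlur(const std::vector<RGB>& color, const std::vector<float>& depth,
                      std::vector<RGB>& out, int width, int height,
                      float focusDistance, float blurScale, int maxRadius)
{
    out.resize(color.size());
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            // Blur radius grows with distance from the focus plane ("circle of confusion").
            float coc  = std::fabs(depth[y * width + x] - focusDistance) * blurScale;
            int radius = std::min(maxRadius, static_cast<int>(coc));

            // Simple box average over the neighbourhood, clamped at the image borders.
            RGB sum = {0.0f, 0.0f, 0.0f};
            int count = 0;
            for (int dy = -radius; dy <= radius; ++dy)
            {
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    int sx = std::min(width  - 1, std::max(0, x + dx));
                    int sy = std::min(height - 1, std::max(0, y + dy));
                    const RGB& c = color[sy * width + sx];
                    sum.r += c.r; sum.g += c.g; sum.b += c.b;
                    ++count;
                }
            }
            out[y * width + x] = { sum.r / count, sum.g / count, sum.b / count };
        }
    }
}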

Answered Nov 09 '22 by IneQuation


Here is the code I wrote to generate DOF.

void generateDOFfromEye(Image& img, const Camera& camera, Scene scene, float focusPoint)
{
    float pixelWidth  = 1.0f / (float) img.width;
    float pixelHeight = 1.0f / (float) img.height;

    for (int y = 0; y < img.height; ++y)
    {
        for (int x = 0; x < img.width; ++x)
        {
            Color output(0,0,0,0);
            img(x, y) = Color(0,0,0,0);

            // Center of the current pixel
            float px = (x * pixelWidth) - 0.5;
            float py = (y * pixelHeight) - 0.5;

            Ray cameraSpaceRay = Ray(Vector(0,0,0,1), Vector(px, py, 1.0f, 0.0f));

            Ray ray = camera.Transform() * cameraSpaceRay;

            int depth = 0;
            int focaldistance = 2502;
            Color blend(0,0,0,0);

            // Stratified sampling, i.e. random sampling (with 16 samples) inside each pixel to add DOF
            for (int i = 0; i < 16; i++)
            {
                // Random values between [-1,1]
                float rw = (static_cast<float>(rand() % RAND_MAX) / RAND_MAX) * 2.0f - 1.0f;
                float rh = (static_cast<float>(rand() % RAND_MAX) / RAND_MAX) * 2.0f - 1.0f;

                // Since the eye position is (0,0,0,1), I generate samples around that point
                // within a 3x3-pixel aperture window.
                float dx = ((rw) * 3 * pixelWidth) - 0.5;
                float dy = ((rh) * 3 * pixelHeight) - 0.5;

                // Here I compute the point P in the scene where I want the focus to be
                Vector P   = Vector(0,0,0,1) + focusPoint * ray.Direction();
                Vector dir = P - Vector(dx, dy, 0.0f, 1.0f);

                ray = Ray(Vector(dx, dy, 0.0f, 1.0f), dir);
                ray = camera.Transform() * ray;

                // Call the Phong shader to render the scene
                blend += phongShader(scene, ray, depth, output);
            }
            blend /= 16.0f;

            img(x, y) += blend;
        }
    }
}

Now, I don't see anything wrong in this code, but the result I get is just a blurred image for focusPoint > 500, as shown below:

[Rendered image: the whole frame appears blurred]
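
To make the intent clearer, this is the per-sample geometry I am trying to implement, written out in camera space (a sketch only, using the same Ray/Vector classes; dx and dy are the jittered lens offsets from the loop above):

// Intended per-sample geometry (camera space), as a sketch:
// 1. jitter the eye position on a small aperture around (0,0,0,1)
// 2. find the point on the focal plane hit by this pixel's pinhole ray
// 3. shoot the sample ray from the jittered position through that focus point

Vector lensPoint(dx, dy, 0.0f, 1.0f);                                        // jittered eye
Vector focusP = Vector(0,0,0,1) + focusPoint * cameraSpaceRay.Direction();   // on the focal plane
Vector dir    = focusP - lensPoint;

Ray sampleRay = camera.Transform() * Ray(lensPoint, dir);                    // to world space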

If you can tell me what is wrong in this code, it would be very helpful :) Thanks!

Answered Nov 09 '22 by sinner