I'm getting a little bit confused here.
How does the Kinect calculate depth? What I understand is that the IR projector casts a known pattern, the IR camera observes it, and the deformation of the observed pattern relative to the known one gives the depth.
Question 1: Does it consider the distance between the IR projector and the IR camera? I would guess not, because they are too close together to matter.
Question 2: If we are getting the depth directly from the pattern, when is the disparity map used to calculate depth?
The disparity map is basically the difference between the known (reference) pattern and the observed pattern that you mention at the beginning; it is used during the depth computation. The distance between the projector and the camera (the baseline) is taken into account as well.
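To make that concrete, here is a very simplified sketch of how a disparity value can be obtained by matching the observed speckle image against the stored reference pattern. The Kinect's actual on-chip correlation algorithm is not public, so treat this as an illustration of the principle only; the function name, block size, and search range are arbitrary choices:

```python
import numpy as np

def disparity_by_block_matching(observed, reference, block=9, max_disp=64):
    """Toy illustration: for each pixel, slide a small window of the observed
    IR speckle image along the same row of the reference image and keep the
    shift with the highest correlation. Only one shift direction is searched,
    for simplicity; this is a sketch of the idea, not the Kinect's algorithm."""
    h, w = observed.shape
    half = block // 2
    disparity = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = observed[y - half:y + half + 1, x - half:x + half + 1]
            best_score, best_d = -np.inf, 0
            for d in range(0, min(max_disp, w - half - 1 - x) + 1):
                cand = reference[y - half:y + half + 1,
                                 x + d - half:x + d + half + 1]
                # zero-mean correlation as a simple similarity measure
                score = np.sum((patch - patch.mean()) * (cand - cand.mean()))
                if score > best_score:
                    best_score, best_d = score, d
            disparity[y, x] = best_d
    return disparity
```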
Check out the following figure:
Pr is the position of a speckle at a reference depth Zr, and Po is the same speckle captured by the Kinect at a depth Zo (the depth we want to calculate). D is the 3D disparity between the two points, while d is the disparity on the 2D image plane. f is the focal length, and b is the distance between the camera C and the laser projector L.
As you mentioned, using similar triangles, the depth is calculated as:
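D / b = (Zr - Zo) / Zr    and    d / f = D / Zo

Substituting D from the second relation into the first and solving for Zo gives

Zo = Zr / (1 + (Zr / (f * b)) * d)

so the measured image disparity d, together with the calibrated focal length f, baseline b, and reference depth Zr, yields the depth of the point; a larger disparity means the point is closer than the reference plane.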
The paper where the figure is from is Accuracy Analysis of Kinect Depth Data by K. Khoshelham. I'd suggest reading it for a more thorough explanation of the depth calculation process.
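If it helps, here is a minimal Python sketch of that last step, plugging a disparity value into the formula above. The focal length, baseline, and reference depth below are made-up illustrative numbers, not calibrated Kinect parameters:

```python
import numpy as np

def disparity_to_depth(d, f=580.0, b=0.075, z_ref=2.0):
    """Convert image-plane disparity d (in pixels, relative to the reference
    pattern) to metric depth using Zo = Zr / (1 + (Zr / (f*b)) * d).
    f: focal length in pixels, b: baseline in metres, z_ref: reference depth
    in metres. All values here are illustrative, not real Kinect calibration."""
    d = np.asarray(d, dtype=np.float64)
    return z_ref / (1.0 + (z_ref / (f * b)) * d)

# A point with zero disparity lies on the reference plane; a positive
# disparity means the point is closer to the sensor.
print(disparity_to_depth(0.0))   # -> 2.0 (the reference depth)
print(disparity_to_depth(10.0))  # -> ~1.37
```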