 

Does Kinect Infrared View Have an offset with the Kinect Depth View

Tags:

opencv

kinect

I am working on a Kinect project using the infrared view and the depth view. In the infrared view, using the CVBlob library, I am able to extract some 2D points of interest. I want to find the depth of these 2D points, so I thought I could use the depth view directly, something like this:

coordinates3D[0] = coordinates2D[0];
coordinates3D[1] = coordinates2D[1];
// low 3 bits of each 16-bit depth sample hold the player index
coordinates3D[2] = ((USHORT*)LockedRect.pBits)
    [(int)coordinates2D[1] * Width + (int)coordinates2D[0]] >> 3;

I don't think this is the right formula to get the depth. I am able to visualize the 2D points of interest in the depth view: if I get a point (x, y) in the infrared view, I draw it as a red point in the depth view at (x, y).
I noticed that the red points are not where I expect them to be (on an object); there is a systematic error in their locations.

I was under the impression that the depth and infrared views have a one-to-one correspondence, unlike the correspondence between the color view and the depth view.
Is this indeed true, or is there an offset between the IR and depth views? If there is an offset, can I somehow get the right depth value?

Asked Jun 05 '13 by Aparajith Sairam


2 Answers

Depth and Color streams are not taken from the same point, so they do not correspond to each other perfectly. Their FOV (field of view) is also different.

  1. cameras

    • IR/Depth FOV 58.5° x 45.6°
    • Color FOV 62.0° x 48.6°
    • distance between cameras 25mm
  2. my corrections for 640x480 resolution for both streams

    if (valid_depth)
    {
        ax = (((x + 10 - xs2) * 241) >> 8) + xs2;  // ~241/256 scale about image center, +10 px x-offset
        ay = (((y + 30 - ys2) * 240) >> 8) + ys2;  // ~240/256 scale about image center, +30 px y-offset
    }
    
    • x,y are input coordinates in the depth image
    • ax,ay are output coordinates in the color image
    • xs,ys = 640,480
    • xs2,ys2 = 320,240

    as you can see, my Kinect also has a y-offset, which is weird (it is even bigger than the x-offset). My conversion works well at ranges up to 2 m; I did not measure it further, but it should work even then.

  3. do not forget to compute the space coordinates from the raw depth and the depth-image coordinates

    pz = 0.8 + (float(rawdepth - 6576) * 0.00012115165336374002280501710376283); // raw depth -> meters
    px = -sin(58.5 * deg * float(x - xs2) / float(xs)) * pz;  // horizontal FOV 58.5°
    py = +sin(45.6 * deg * float(y - ys2) / float(ys)) * pz;  // vertical FOV 45.6°
    pz = -pz;                                                 // flip Z (see note below)
    
    • where px,py,pz are the point coordinates in meters, in space relative to the Kinect

    I use a camera coordinate system with the opposite Z direction, hence the sign negation. (Both corrections are collected into a single sketch below.)

PS. I have the old model 1414, so newer models probably have different calibration parameters.
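For convenience, here is the above consolidated into a minimal, self-contained C++ sketch (my own consolidation, not SDK code). The constants are the hand-measured model-1414 values from this answer, so treat them as a starting point rather than universal calibration:

    #include <cmath>

    // Constants from the corrections above (model 1414, 640x480 streams).
    const int   xs  = 640, ys  = 480;   // stream resolution
    const int   xs2 = 320, ys2 = 240;   // half resolution (image center)
    const float deg = 0.01745329252f;   // degrees -> radians

    // Map a depth-image pixel (x, y) to the matching color-image pixel (ax, ay).
    void depthToColorPixel(int x, int y, int &ax, int &ay)
    {
        ax = (((x + 10 - xs2) * 241) >> 8) + xs2;  // scale about center, +10 px x-offset
        ay = (((y + 30 - ys2) * 240) >> 8) + ys2;  // scale about center, +30 px y-offset
    }

    // Convert a raw depth sample at depth pixel (x, y) into a 3D point in meters.
    void depthToSpace(int x, int y, int rawdepth, float &px, float &py, float &pz)
    {
        pz = 0.8f + float(rawdepth - 6576) * 0.00012115165336374002f;   // raw -> meters
        px = -std::sin(58.5f * deg * float(x - xs2) / float(xs)) * pz;  // horizontal FOV
        py = +std::sin(45.6f * deg * float(y - ys2) / float(ys)) * pz;  // vertical FOV
        pz = -pz;  // camera looks along -Z in my coordinate system
    }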

Answered by Spektre


There is no offset between the "IR View" and "Depth View". Primarily because they are the same thing.

The Kinect has two cameras: an RGB color camera and a depth camera, which uses an IR blaster to generate a light field that is used when processing the data. These give you a color video stream and a depth data stream; there is no "IR view" separate from the depth data.


UPDATE:

They are actually the same thing. What you are referring to as a "depth view" is simply a colorized version of the "IR view"; the black-and-white image is the "raw" data, while the color image is a processed version of the same data.

In the Kinect for Windows Toolkit, have a look at the KinectWpfViewers project (if you installed the KinectExplorer-WPF example, it should be there). In there are the KinectDepthViewer and DepthColorizer classes. They demonstrate how the colorized "depth view" is created.
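The core of such a colorizer is just a per-pixel mapping from the raw 16-bit depth sample to a display color. A rough C++ sketch of the general idea (my own illustration, not the toolkit's actual DepthColorizer code):

    #include <cstdint>

    // Map one Kinect v1 depth sample to a grayscale BGR pixel for display.
    // The low 3 bits of 'raw' hold the player index; the rest is depth in mm.
    void colorizeDepthPixel(uint16_t raw, uint8_t &b, uint8_t &g, uint8_t &r)
    {
        const int minDepth = 800;   // default-range near limit, mm
        const int maxDepth = 4000;  // default-range far limit, mm

        int depth = raw >> 3;       // strip the player-index bits
        if (depth < minDepth || depth > maxDepth)
        {
            b = g = r = 0;          // out of range -> black
            return;
        }
        // Linear ramp: near = bright, far = dark.
        uint8_t v = uint8_t(255 - 255 * (depth - minDepth) / (maxDepth - minDepth));
        b = g = r = v;
    }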

UPDATE 2:

Per the comments below, what I've said above is almost entirely junk. I'll likely edit it out or just delete my answer in full in the near future; until then, it shall stand as a testament to my once-invalid beliefs about what was coming from where.

Anyways... have a look at the CoordinateMapper class as another possible solution. The link will take you to the managed code docs (which is what I'm familiar with); I'm looking around the C++ docs to see if I can find the equivalent.

I've used this to map the standard color and depth views. It may map the IR view just as well (I don't see why not), but I'm not 100% sure of that.
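For the native side, the closest SDK v1 equivalent I'm aware of is NuiImageGetColorPixelCoordinatesFromDepthPixel (later v1 releases also expose an INuiCoordinateMapper interface). A rough sketch; I haven't verified this against every SDK version, so check the signature in your own headers:

    #include <Windows.h>
    #include <NuiApi.h>  // Kinect for Windows SDK v1.x

    // Map a depth pixel to color-image coordinates using the SDK's own
    // calibration instead of hand-tuned offsets. 'rawDepth' is the packed
    // depth value (depth in mm shifted left by 3 bits).
    HRESULT mapDepthToColor(LONG depthX, LONG depthY, USHORT rawDepth,
                            LONG *colorX, LONG *colorY)
    {
        return NuiImageGetColorPixelCoordinatesFromDepthPixel(
            NUI_IMAGE_RESOLUTION_640x480,  // color stream resolution
            NULL,                          // default view area
            depthX, depthY,
            rawDepth,
            colorX, colorY);
    }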

Answered by Nicholas Pappas