
Precision of the Kinect depth camera

Tags: kinect, depth

How precise is the depth camera in the Kinect?

  • range?
  • resolution?
  • noise?

Especially I'd like to know:

  • Are there any official specs about it from Microsoft?
  • Are there any scientific papers on the subject?
  • Investigations from TechBlogs?
  • Personal experiments that are easy to reproduce?

I've been collecting data for about a day now, but most writers don't name their sources and the values seem to differ quite a bit...

— asked by Fabian, Oct 08 '11




7 Answers

  • Range: ~50 cm to 5 m. Parts of the scene can be closer (~40 cm), but the full view can't be < 50 cm.
  • Spatial resolution: 640 x 480, with a 45° vertical and 58° horizontal FOV. Simple geometry shows this is roughly 0.75 mm per pixel (in x and y) at 50 cm, and roughly 3 mm per pixel at 2 m (see the sketch below).
  • Depth resolution: ~1.5 mm at 50 cm, about 5 cm at 5 m.
  • Noise: about ±1 DN at all depths, but the DN-to-depth mapping is non-linear. This means ±1 mm up close and ±5 cm far away.

There are official specs from the sensor developer (PrimeSense), not from Microsoft. No scientific papers that I know of yet. Plenty of investigations and experiments (see Google). The OpenKinect community has a lot more discussion of these things than this site does, for now.
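
As a quick sanity check on those numbers, here is a small Python sketch. The helper names are mine, and the raw-disparity-to-depth fit is one commonly circulated in the OpenKinect community, an empirical approximation rather than an official formula. It derives the per-pixel footprint from the FOV, and shows why a ±1 DN error is worth millimetres up close but centimetres far away:

    import math

    def lateral_res_mm(distance_m, fov_deg, pixels):
        # Footprint of one pixel (mm) at a given distance, ideal pinhole model.
        return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0) / pixels * 1000.0

    def raw_to_depth_m(raw):
        # OpenKinect-community fit from raw disparity (DN) to metres;
        # an empirical approximation, not an official conversion.
        return 1.0 / (raw * -0.0030711016 + 3.3309495161)

    for d in (0.5, 2.0):
        print(f"{d} m: {lateral_res_mm(d, 58, 640):.2f} mm/px horizontal")

    # Depth quantisation: how much one DN step is worth near vs. far.
    for raw in (600, 1000):
        step_mm = (raw_to_depth_m(raw + 1) - raw_to_depth_m(raw)) * 1000.0
        print(f"raw={raw}: depth {raw_to_depth_m(raw):.2f} m, 1 DN ~ {step_mm:.1f} mm")

This reproduces the figures above to within rounding: roughly 0.9 mm/px at 50 cm and 3.5 mm/px at 2 m, and a single DN step worth ~1.4 mm at 0.7 m but ~46 mm near 4 m.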

— answered by mankoff


The Kinect for Windows SDK provides some constants which I've been using and which seem to be consistent. For range and resolution, the values are:

In default mode:

  • Minimum range: 80 cm
  • Maximum range: 400 cm

In near mode:

  • Minimum range: 40 cm
  • Maximum range: 300 cm

For the color camera, you may have either of the following resolutions:

  • 80x60
  • 320x240
  • 640x480
  • 1280x960

For the depth camera, you may have either of the following resolutions:

  • 80x60
  • 320x240
  • 640x480

Comparing against the information from Avada Kedavra (and from most sources, for that matter), the field-of-view values given by the API are the following:

For the color camera:

  • Horizontal FOV: 62.0°
  • Vertical FOV: 48.6°

For the depth camera:

  • Horizontal FOV: 58.5°
  • Vertical FOV: 45.6°

Source: http://msdn.microsoft.com/en-us/library/hh855368
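
To make those FOV numbers concrete, here is a small Python sketch (an ideal pinhole model that ignores lens distortion; the function names are mine) converting them into focal lengths and back-projecting a depth pixel into camera-space coordinates:

    import math

    def focal_length_px(pixels, fov_deg):
        # Pinhole focal length in pixels implied by a resolution and its FOV.
        return (pixels / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

    def depth_pixel_to_point(u, v, depth_m, width=640, height=480,
                             hfov_deg=58.5, vfov_deg=45.6):
        # Back-project depth pixel (u, v) into metres in the camera frame.
        fx = focal_length_px(width, hfov_deg)
        fy = focal_length_px(height, vfov_deg)
        x = (u - width / 2.0) * depth_m / fx
        y = (v - height / 2.0) * depth_m / fy
        return (x, y, depth_m)

    # Example: the centre pixel at 2 m maps to (0, 0, 2).
    print(depth_pixel_to_point(320, 240, 2.0))

With these FOV values the implied focal length comes out around 570 px, which is in the same ballpark as commonly quoted Kinect calibrations.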

— answered by Guilherme


The real question here was about resolution and precision, so I'd like to chip in, as I find both to be not as good as stated. The maximum output resolution of the depth map is indeed 640x480; however, this is not the effective resolution, and it is not exactly how precise the sensor is.

The Kinect works by structured light projection: a light pattern is cast onto the scene, a camera observes it, and each ray is triangulated from the emitter, off the object, back to the camera.

The thing is that this pattern consists of only 34,749 bright spots that can be triangulated (http://azttm.wordpress.com/2011/04/03/kinect-pattern-uncovered/). If we relate this to the output resolution of 640x480 = 307,200 data points, we notice a great difference. Ask yourself whether data containing almost ten times as many points as there are source points can be considered valid and efficiently sampled; I doubt it. If you were to ask me what the effective resolution of the Kinect is, I would guess around 240x180 of honest, pretty good data. (A back-of-envelope check follows below.)
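
Here is that back-of-envelope check in Python (the spot count comes from the link above; spreading the spots over a uniform 4:3 grid is my own simplification):

    import math

    spots = 34749          # bright spots in the projected pattern (per the link above)
    pixels = 640 * 480     # data points in the output depth map

    print(f"oversampling factor: {pixels / spots:.1f}x")   # ~8.8x

    # If the spots were spread evenly over a 4:3 frame, the equivalent grid is:
    h = math.sqrt(spots * 3 / 4)
    print(f"equivalent grid: {h * 4 / 3:.0f} x {h:.0f}")   # ~215 x 161

That lands reasonably near the 240x180 estimate above.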

— answered by TimZaman


According to "Kinect tech spec finally revealed", the specs for the depth field are as follows (this matches what is confirmed in the official programming guide posted by Mannimarco):

  • Horizontal field of view: 57 degrees
  • Vertical field of view: 43 degrees
  • Physical tilt range: ±27 degrees
  • Depth sensor range: 1.2 m - 3.5 m
  • Resolution, depth stream: 320x240 pixels
  • Resolution, color stream: 640x480 pixels

But from my own experience the depth sensor range is more like 0.8 m - 4.0 m; at least, I get good readings in that range. This matches the PrimeSense data sheet posted by mankoff in the comments below.

It is also important to remember that depth resolution is much higher close to the sensor than further away: at 3-4 meters it is not nearly as good as at 1.5 m. This matters if you, for example, want to calculate surface normals; the result will be better closer to the sensor.

It's not too hard to test the range yourself. The official SDK (currently beta) gives you a zero (0) depth when you are out of range, so you could test this with a simple ruler and see at what distances you do or don't get a reading larger than zero. I do not know how the OpenKinect SDK handles out-of-range readings.

A comment about noise: I would say there is quite a bit of noise in the depth stream, which makes it harder to work with. For example, if you calculate surface normals you can expect them to be a bit "jumpy", which of course has a negative impact on fake lighting and so on. Furthermore, there is a parallax issue in the depth stream due to the distance between the IR transmitter and the receiver, which leaves a large "shadow" in the depth data and can also be hard to work with. This YouTube video demonstrates the problem and discusses a way to resolve it using shaders; it's worth watching. (A small sketch of the normals issue follows below.)
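
To illustrate the "jumpy" normals, here is a minimal NumPy sketch. It assumes a raw depth frame in millimetres with zeros marking out-of-range pixels (as the official SDK reports them), and the gradients are per-pixel rather than metric, which is enough to show the effect:

    import numpy as np

    def surface_normals(depth_mm):
        # Per-pixel normals of the surface z = f(x, y): the (unnormalised)
        # normal is (-dz/dx, -dz/dy, 1), which we then normalise.
        d = depth_mm.astype(np.float32)
        invalid = d == 0                    # out-of-range pixels
        n = np.dstack((-np.gradient(d, axis=1),
                       -np.gradient(d, axis=0),
                       np.ones_like(d)))
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        n[invalid] = 0.0
        return n

    # Synthetic flat wall at ~2 m with a few mm of noise: even this small
    # amount of noise makes the normals flicker from frame to frame.
    frame = 2000.0 + np.random.normal(0.0, 5.0, (240, 320))
    normals = surface_normals(frame)

Smoothing the depth map (or averaging normals over a neighbourhood) before lighting is the usual workaround.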

— answered by Avada Kedavra


I think it is worth mentioning the paper by Khoshelham and Elberink, who proposed a theoretical random error model for the Kinect's depth sensor in Feb '12. It is called "Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications". If I remember correctly, their model has the random depth error growing quadratically with distance from the sensor. The paper can be found here.

— answered by SemtexB


If you're looking for something published by Microsoft, check out page 11 of the Kinect Programming Guide. It says pretty much the same thing everyone here has already mentioned.

  • Range: 1.2 to 3.5 meters
  • Viewing angle: 43° vertical by 57° horizontal
  • Mechanized tilt range: ±28°
  • Frame rate: 30 frames per second
  • Resolution, depth stream: 320 x 240 (it can actually go higher than this)
  • Resolution, color stream: 640 x 480 (again, it can go higher)

I don't see anything mentioning noise, but I can say it's pretty minimal except along surface edges where it can become more noticeable.

— answered by Coeffect


My experience is that it is not that exact. It's pretty OK, but when you compare it to a tape measure it doesn't match exactly. I made an Excel sheet with measurements every 10 mm, and it just doesn't hold up, especially for things more than 2500 mm away, but closer up too. A minimal version of that comparison is sketched below.
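
If you want to reproduce this experiment, here is a minimal Python sketch (the numbers are illustrative placeholders, not actual measurements):

    import statistics

    # Paired distances in mm: tape-measure ground truth vs. the Kinect's
    # reading at the same spot (e.g. the centre pixel), every 10 mm.
    tape   = [800, 810, 820, 830, 840]   # placeholder values
    kinect = [803, 811, 824, 836, 849]   # placeholder values

    errors = [k - t for k, t in zip(kinect, tape)]
    print("mean error (bias):", statistics.mean(errors), "mm")
    print("error spread (stdev):", round(statistics.stdev(errors), 1), "mm")
    print("worst case:", max(errors, key=abs), "mm")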

Keep in mind also that the actual number of depth pixels is a lot lower than advertised. The electronics inside fill in the gaps, which is why you see small area artifacts rather than per-pixel data. In essence this means that at 320x240 only about 1/8 of the pixels are covered by a "real" measurement; the other pixels are interpolated. So you could use 640x480, but it would only cost CPU/USB resources and would not make your application see any better.

That's just my two cents from experience; I program robotics.

— answered by user613326