Microsoft Kinect SDK depth data to real world coordinates

Tags:

kinect

I'm using the Microsoft Kinect SDK to get the depth and color information from a Kinect and then convert that information into a point cloud. I need the depth information to be in real world coordinates with the centre of the camera as the origin.

I've seen a number of conversion functions, but these are apparently for OpenNI and non-Microsoft drivers. I've read that the depth information coming from the Kinect is already in millimetres, and is contained in 11 bits... or something.

How do I convert this bit information into real world coordinates that I can use?

Thanks in advance!

asked Jan 09 '12 by Simon Trewhella

1 Answer

This is catered for within the Kinect for Windows library by the Microsoft.Research.Kinect.Nui.SkeletonEngine class, via the following method:

public Vector DepthImageToSkeleton (
    float depthX,
    float depthY,
    short depthValue
)

This method maps a pixel of the depth image produced by the Kinect into skeleton space: real-world coordinates with the camera at the origin.
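For illustration, here is a minimal sketch of calling it for a single pixel. The nui variable is an assumption (an already-initialised Runtime from Microsoft.Research.Kinect.Nui, set up as in the sketch at the end of this answer), and the raw depth value is made up:

    // Minimal sketch, not part of the conversion loop below: nui is an
    // initialised Runtime with skeletal tracking enabled.
    short rawDepth = 1500;                       // e.g. ~1.5 m in front of the sensor (raw depth is in mm)
    Vector p = nui.SkeletonEngine.DepthImageToSkeleton(
        160f / 320f,                             // depthX: column normalised to 0..1
        120f / 240f,                             // depthY: row normalised to 0..1
        (short)(rawDepth << 3));                 // depth packed into the upper bits, as the method expects
    Console.WriteLine("{0:F2} {1:F2} {2:F2} (metres)", p.X, p.Y, p.Z);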

From there (I've used this in the past to build a mesh), you enumerate the byte array of the bitmap in the Kinect depth frame and build a list of Vector points, similar to the following:

        // `image` is the depth ImageFrame, `nui` the initialised Runtime, and `maximumDepth`
        // a cull threshold in the same units as the raw depth value (millimetres).
        // A raw depth of 0 means "no reading", so you may also want to skip those pixels.
        var width = image.Image.Width;
        var height = image.Image.Height;
        var greyIndex = 0;

        var points = new List<Vector>();

        for (var y = 0; y < height; y++)
        {
            for (var x = 0; x < width; x++)
            {
                short depth;
                switch (image.Type)
                {
                    case ImageType.DepthAndPlayerIndex:
                        // the low three bits of the first byte hold the player index, so shift them out
                        depth = (short)((image.Image.Bits[greyIndex] >> 3) | (image.Image.Bits[greyIndex + 1] << 5));
                        if (depth <= maximumDepth)
                        {
                            // shift the depth back into the upper bits, as DepthImageToSkeleton expects
                            points.Add(nui.SkeletonEngine.DepthImageToSkeleton(((float)x / image.Image.Width), ((float)y / image.Image.Height), (short)(depth << 3)));
                        }
                        break;
                    case ImageType.Depth: // depth comes back mirrored, hence the flipped x below
                        depth = (short)((image.Image.Bits[greyIndex] | image.Image.Bits[greyIndex + 1] << 8));
                        if (depth <= maximumDepth)
                        {
                            points.Add(nui.SkeletonEngine.DepthImageToSkeleton(((float)(width - x - 1) / image.Image.Width), ((float)y / image.Image.Height), (short)(depth << 3)));
                        }
                        break;
                }

                greyIndex += 2; // each depth pixel occupies two bytes
            }
        }

By doing so, the end result is a list of skeleton-space vectors whose coordinates are in metres; multiply by 100 for centimetres, by 1,000 for millimetres, and so on.
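For completeness, here is a rough sketch of how this is typically wired up with the beta SDK. The variable names (nui, maximumDepth) and the threshold value are assumptions of mine, and the frame handler is where the conversion loop shown above would run:

    using Microsoft.Research.Kinect.Nui;

    var nui = new Runtime();
    // skeletal tracking is enabled so that SkeletonEngine is available, even though only depth is used here
    nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex | RuntimeOptions.UseSkeletalTracking);
    nui.DepthStream.Open(ImageStreamType.Depth, 2,
                         ImageResolution.Resolution320x240,
                         ImageType.DepthAndPlayerIndex);

    short maximumDepth = 4000;   // cull anything beyond ~4 m (raw depth is in millimetres)

    nui.DepthFrameReady += (sender, e) =>
    {
        ImageFrame image = e.ImageFrame;
        // ... run the conversion loop shown above here to build `points` ...
    };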

answered Sep 28 '22 by LewisBenge