I am tracking a ball using the RGB data from the Kinect. After this I look up the corresponding depth data. Both of these work splendidly. Now I want the actual x, y, z world coordinates (i.e. skeleton space) instead of the x_screen, y_screen and depth values. Unfortunately the methods given by the Kinect SDK (http://msdn.microsoft.com/en-us/library/hh973078.aspx) don't help me. Basically I need a function like "NuiImageGetSkeletonCoordinatesFromColorPixel", but it does not exist; all the functions basically go in the opposite direction.
I know this can probably be done with OpenNI, but I cannot use it for other reasons.
Is there a function that does this for me, or do I have to do the conversion myself? If I have to do it myself, how would I do this? I sketched up a little diagram (http://i.imgur.com/ROBJW8Q.png) - do you think this would work?
Check the CameraIntrinsics structure:
typedef struct _CameraIntrinsics
{
    float FocalLengthX;
    float FocalLengthY;
    float PrincipalPointX;
    float PrincipalPointY;
    float RadialDistortionSecondOrder;
    float RadialDistortionFourthOrder;
    float RadialDistortionSixthOrder;
} CameraIntrinsics;
You can get it from ICoordinateMapper::GetDepthCameraIntrinsics.
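For example, a minimal setup sketch (error handling omitted; GetIntrinsics is just a made-up helper name, not an SDK call) to read the intrinsics could look roughly like this:

#include <Kinect.h>   // Kinect for Windows SDK 2.0, link against Kinect20.lib

// Sketch: open the default sensor, grab its coordinate mapper and read
// the depth camera intrinsics. Note that the intrinsics may come back
// as all zeros until the sensor has streamed a few depth frames.
CameraIntrinsics GetIntrinsics(ICoordinateMapper** mapperOut)
{
    IKinectSensor* sensor = nullptr;
    GetDefaultKinectSensor(&sensor);
    sensor->Open();

    ICoordinateMapper* mapper = nullptr;
    sensor->get_CoordinateMapper(&mapper);

    CameraIntrinsics intrinsics = {};
    mapper->GetDepthCameraIntrinsics(&intrinsics);

    *mapperOut = mapper;   // keep the mapper, it is needed again below
    return intrinsics;
}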
Then, for every pixel (u, v, d) in depth space, you can get the coordinate in world space by doing this:
x = (u - principalPointX) / focalLengthX * d;
y = (v - principalPointY) / focalLengthY * d;
z = d;
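As a rough sketch of that back-projection (DepthPixelToCameraSpace is a hypothetical helper name, not an SDK function): the raw depth value is in millimeters, so it is converted to meters here to match the SDK's CameraSpacePoint units.

// Back-project one depth pixel (u, v) with raw depth value depthMm
// into 3D using the intrinsics read above.
CameraSpacePoint DepthPixelToCameraSpace(const CameraIntrinsics& in,
                                         int u, int v, UINT16 depthMm)
{
    const float d = depthMm * 0.001f;   // millimeters -> meters
    CameraSpacePoint p;
    p.X = (u - in.PrincipalPointX) / in.FocalLengthX * d;
    p.Y = (v - in.PrincipalPointY) / in.FocalLengthY * d;
    p.Z = d;
    return p;   // the sign of X/Y may need flipping to match the SDK's
                // CameraSpacePoint axis conventions exactly
}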
For a color space pixel, you first need to find its associated depth space pixel, which you can do with ICoordinateMapper::MapColorFrameToDepthSpace. Since not every color pixel has an associated depth pixel (1920x1080 vs. 512x424), you can't get a full-HD color point cloud.
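A sketch of that last step, reusing mapper, intrinsics and DepthPixelToCameraSpace from above. Here depthBuffer stands in for your 512x424 depth frame and (cx, cy) for the tracked ball position in the 1920x1080 color image; those names are placeholders for your own variables.

#include <cmath>
#include <vector>

// Sketch: find the 3D position of the tracked color pixel (cx, cy).
CameraSpacePoint ColorPixelToCameraSpace(ICoordinateMapper* mapper,
                                         const CameraIntrinsics& intrinsics,
                                         const UINT16* depthBuffer,
                                         int cx, int cy)
{
    // Map every color pixel to its depth space coordinate.
    std::vector<DepthSpacePoint> colorToDepth(1920 * 1080);
    mapper->MapColorFrameToDepthSpace(512 * 424, depthBuffer,
                                      (UINT)colorToDepth.size(),
                                      colorToDepth.data());

    // Entries mapped to -infinity have no depth data behind them.
    DepthSpacePoint dsp = colorToDepth[cy * 1920 + cx];
    if (std::isinf(dsp.X))
        return CameraSpacePoint{};   // no depth for this color pixel

    int u = (int)(dsp.X + 0.5f);
    int v = (int)(dsp.Y + 0.5f);
    UINT16 d = depthBuffer[v * 512 + u];
    return DepthPixelToCameraSpace(intrinsics, u, v, d);
}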