For example, I am standing in front of my Kinect. The Kinect can identify the joints and expose them as a data structure. Up to this point I am clear.
So, can we define the height as Head joint - ((LeftAnkle + RightAnkle) / 2)?
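Concretely, I mean something like the sketch below, assuming each joint comes back as an (x, y, z) position in a common metric space with y pointing up (the tuple layout is my assumption, not any specific SDK's type):

```python
def estimated_height(head, left_ankle, right_ankle):
    """Height estimate: vertical gap between the head joint and the
    midpoint of the two ankle joints. Each joint is an (x, y, z)
    tuple in millimetres, with y pointing up."""
    ankle_mid_y = (left_ankle[1] + right_ankle[1]) / 2.0
    return head[1] - ankle_mid_y

# Made-up joint positions for illustration (mm):
print(estimated_height((10.0, 1650.0, 2200.0),
                       (-90.0, 80.0, 2210.0),
                       (110.0, 75.0, 2195.0)))  # 1572.5
```

One thing I am not sure about is that the head joint sits roughly at the centre of the head and the ankle joints sit above the sole, so this would underestimate the true height somewhat.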
I have tried trigonometric formulas, but I am facing two problems. One is identifying the person in the view. The second is identifying the exact positions of the top of the head and the bottom of the foot.
I have tried using a point cloud, but got lost on how to generate a point cloud specific to a person, i.e. without including the background objects.
Please suggest some ideas on how I can calculate the height of a person using the Kinect.
You can convert the head joint into the global coordinate system. There is no need to do any math: the y coordinate in global coordinates will be the person's height.
All you need to do is check which pixel the head joint is at and convert the pixel + depth information into world coordinate space in mm.
I don't know which API you are using, but if it is capable of segmenting a human and returning their joints, you are probably using OpenNI/NITE or the Microsoft SDK. Both of them have a function that converts a pixel + depth coordinate into an (x, y, z) position in mm. I don't know exactly what the functions are called, but their names would be something like depth_to_mm or disparity_to_mm. You need to check both documentations to find it, or you can do it yourself. This site has information on how to convert depth to mm: http://nicolas.burrus.name/index.php/Research/KinectCalibration
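If you want to do it yourself, the conversion is just the standard pinhole back-projection described on that page. Here is a minimal Python sketch, where the intrinsic parameters are rough typical values for a Kinect depth camera and are only assumptions; replace them with your own calibration, or use the SDK's built-in conversion function instead:

```python
# Rough Kinect depth-camera intrinsics (assumed values; use your own
# calibration, or the SDK's built-in conversion, for real measurements).
FX, FY = 594.21, 591.04   # focal lengths in pixels
CX, CY = 339.31, 242.74   # principal point in pixels

def pixel_to_world_mm(u, v, depth_mm):
    """Back-project depth pixel (u, v) with depth in mm into an (x, y, z)
    point in the depth camera's frame, in mm. Note that image v grows
    downwards, so flip the sign of y if you need a y-up convention."""
    x = (u - CX) * depth_mm / FX
    y = (v - CY) * depth_mm / FY
    return (x, y, depth_mm)

# Example: head joint seen at pixel (320, 60) with a depth of 2200 mm.
print(pixel_to_world_mm(320, 60, 2200.0))
```

Once the head pixel and a floor/foot pixel have both been converted this way, the height is just the difference of their y values, as described above.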
I have extracted the two points, Head and Left Foot (or Right Foot), and found that the Euclidean distance between these points gives the height with about a 4-inch variation. My test results are satisfactory, so we are using this approach as a temporary workaround.
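For reference, the distance I used is just the 3D Euclidean distance between the two joint positions (a sketch, again assuming (x, y, z) tuples in mm):

```python
import math

def euclidean_distance(a, b):
    """3D Euclidean distance between two (x, y, z) joint positions."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Made-up head and left-foot positions (mm):
print(euclidean_distance((10.0, 1650.0, 2200.0), (-90.0, 60.0, 2210.0)))
```

Since this is the straight-line distance from the head to one foot rather than the vertical extent, it only matches the real height when the person is standing upright, which probably accounts for part of the variation.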