I'm trying to get started with the Kinect. It has a depth-sensing camera, but I've seen no guidance on measuring widths, heights, or lengths.
Is it a matter of working out how far an object is from the camera (via the depth sensor), the Kinect's field of view at that range, and then how many pixels the object takes up?
I'd like to be able to build a mesh or something similar from a point cloud, and I'm having trouble figuring out where to start and how to get proper width/height measurements for objects.
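The approach described in the question is essentially correct and can be sketched in a few lines. The sketch below assumes the commonly cited Kinect v1 depth-camera figures (roughly a 57° horizontal field of view and a 640-pixel-wide depth image); check your device's actual specs before relying on the numbers.

```python
import math

def object_width_m(depth_m, object_pixels, image_width_px=640, fov_h_deg=57.0):
    """Estimate an object's real-world width from its pixel extent.

    At distance depth_m the camera's horizontal field of view spans a
    plane 2 * depth_m * tan(fov/2) metres wide; the object covers
    object_pixels / image_width_px of that span.
    """
    plane_width_m = 2.0 * depth_m * math.tan(math.radians(fov_h_deg) / 2.0)
    return plane_width_m * (object_pixels / image_width_px)

# e.g. an object spanning 160 of 640 pixels at a depth of 2 m
width = object_width_m(2.0, 160)
```

The same formula with the vertical field of view and image height gives the object's height. Note this is a flat-plane approximation; for precise work you would use the per-pixel depth values and the camera's intrinsic calibration instead.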
This is a rather complex task and cannot be answered in a few paragraphs here on Stack Overflow, because it requires a lot of knowledge that builds on other knowledge. I would start by reading up on linear algebra, using for example the excellent text by Rorres et al.
Creating the mesh from the point cloud is a complex task, and there is no de facto algorithm in use today. The most popular approach seems to be to first create a discretized Truncated Signed Distance Function (TSDF) and then use e.g. Marching Cubes to extract a mesh. Another option is Delaunay triangulation.
There is also a C# implementation provided by the s-hull project.
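The core TSDF idea can be sketched in a few lines (a pure-Python toy; the truncation distance and depth values are assumed for illustration): each voxel stores the truncated difference between the depth measured along its viewing ray and the voxel's own depth, and Marching Cubes later extracts the surface where these values cross zero.

```python
TRUNC = 0.1  # truncation distance in metres (assumed value)

def truncated_sdf(measured_depth, voxel_depth, trunc=TRUNC):
    """Signed distance along the viewing ray, clamped to [-trunc, +trunc].

    Positive in free space in front of the surface, zero at the surface,
    negative (down to -trunc) just behind it.
    """
    sdf = measured_depth - voxel_depth
    return max(-trunc, min(trunc, sdf))

# A 1-D slice of a voxel grid along one camera ray, with the surface
# measured at 1.5 m: values run from +TRUNC (empty space) through 0
# (the surface) to -TRUNC (behind the surface).
slice_values = [truncated_sdf(1.5, z / 10.0) for z in range(10, 21)]
```

In a real pipeline (e.g. KinectFusion-style reconstruction) these per-frame values are averaged into the grid over many frames, which is what smooths out the sensor noise before meshing.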
In the book Beginning Kinect Programming with the Microsoft Kinect SDK by Jarrett Webb and James Ashley, chapter 3 contains a sample showing how to calculate width, height, and distance:
http://books.google.es/books?id=MupB_VAmtdEC&pg=PA69&hl=es&source=gbs_toc_r&cad=4#v=onepage&q&f=false
The code is available for download at apress.com.