I have been looking at the Kinect for Windows release notes and features, since I want to incorporate gesture recognition in my project as well.
On that page, the first line states that "The Kinect for Windows SDK enables developers to create applications that support gesture and voice recognition". The voice recognition API ships with the SDK and can be used readily. However, I don't think there are any gesture recognition APIs in the SDK. The skeleton tracking APIs are readily available, but they have to be built upon to get gesture recognition.
I have seen videos of Windows Media Center being controlled by gestures, as well as other applications. I wonder whether all of these applications are custom built and had to write their own gesture recognition code.
Currently, in my project I am using Kinect DTW Gesture Recognition from Codeplex. I am having two issues with it: 1) It appears to be very performance-hungry; with it enabled, my app throws an OutOfMemoryException after some time (the PC specs are fairly high). 2) I can't say much about the robustness of the system; it works at times for some people and not for others.
I thought that if gesture recognition APIs were built into the SDK, it would be good to switch to them. Are they available, and if not, what is the recommended approach?
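For context on issue 1: the core of any DTW-based matcher is a dynamic-programming alignment between a recorded joint trajectory and a stored template. Below is a rough sketch (plain Python over hypothetical (x, y) joint tuples, not the actual Codeplex code), mainly to show where the cost comes from: a naive implementation builds a full cost matrix per template, and buffering frames without a cap makes memory use grow quickly.

    # Minimal DTW sketch for gesture matching. Hypothetical data: each frame
    # is a tuple of (x, y) coordinates for one joint, not a real Kinect
    # skeleton frame.
    import math

    def frame_distance(a, b):
        # Euclidean distance between two frames of equal length.
        return math.sqrt(sum((ax - bx) ** 2 for ax, bx in zip(a, b)))

    def dtw_distance(recorded, template, window=20):
        # Sakoe-Chiba band: only align frames whose indices differ by at
        # most `window`, which bounds the work per template.
        n, m = len(recorded), len(template)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            lo = max(1, i - window)
            hi = min(m, i + window)
            for j in range(lo, hi + 1):
                d = frame_distance(recorded[i - 1], template[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                     cost[i][j - 1],      # deletion
                                     cost[i - 1][j - 1])  # match
        return cost[n][m]

In this sketch, recorded and template are lists of (x, y) tuples for a single joint; a real recognizer would normalize positions relative to the shoulder center, cap the length of the recorded buffer, and compare the returned score against a per-gesture threshold.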
I'm actually doing this right now for a school project. We had to create our own gesture recognition module. There isn't anything in the API that will do it for you, but it gives you all the pieces you need to build the capability yourself.
This article was a big help: http://blogs.msdn.com/b/mcsuksoldev/archive/2011/08/08/writing-a-gesture-service-with-the-kinect-for-windows-sdk.aspx. It talks about how to break gestures down into segments. If you only have a handful of gestures that you can hard-code, it's trivial. We needed the ability to load in and recognize user-defined gestures, but this article describes the basic framework we used to get there.
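To give a sense of what hard-coding a gesture looks like, here is a rough sketch of a swipe-right gesture broken into ordered segments. The frame type and thresholds are made up for illustration; in real code the joint positions would come from the SDK's skeleton stream, and the article above uses a richer segment-result model (succeeded/failed/pausing) than this simplified pass/fail version.

    # Hypothetical skeleton frame: just the joints this gesture needs.
    # A real implementation would fill these from the Kinect skeleton stream.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        hand_right_x: float
        hand_right_y: float
        shoulder_center_x: float
        shoulder_center_y: float
        shoulder_right_x: float

    # Each segment is a predicate over one frame; a gesture is an ordered
    # list of segments that must be satisfied in order, within a frame budget.
    def hand_left_of_center(f):
        return f.hand_right_x < f.shoulder_center_x

    def hand_between_center_and_right_shoulder(f):
        return f.shoulder_center_x <= f.hand_right_x <= f.shoulder_right_x

    def hand_right_of_body(f):
        return f.hand_right_x > f.shoulder_right_x

    class SwipeRightGesture:
        SEGMENTS = [hand_left_of_center,
                    hand_between_center_and_right_shoulder,
                    hand_right_of_body]
        MAX_FRAMES = 50  # give up if the gesture is not completed in time

        def __init__(self):
            self._index = 0
            self._frames_since_start = 0

        def update(self, frame):
            # Returns True exactly once, on the frame that completes the gesture.
            if self._index > 0:
                self._frames_since_start += 1
                if self._frames_since_start > self.MAX_FRAMES:
                    self.reset()
            if self.SEGMENTS[self._index](frame):
                self._index += 1
                if self._index == len(self.SEGMENTS):
                    self.reset()
                    return True
            return False

        def reset(self):
            self._index = 0
            self._frames_since_start = 0

You would call update() once per skeleton frame and fire your gesture event when it returns True; each additional hard-coded gesture is just another ordered list of segment predicates.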