
Finger/Hand Gesture Recognition using Kinect

Let me explain my need before I explain the problem. I am looking for a hand-controlled application: navigation using an open palm and clicks using a grab/fist gesture.

Currently I am working with OpenNI, which looks promising and has a few examples that turned out to be useful in my case, since one of the samples has a built-in hand tracker. That serves my purpose for the time being.

What I want to ask is,

1) What would be the best approach to build a fist/grab detector?

I trained and used AdaBoost fist classifiers on extracted RGB data, which worked reasonably well, but produced too many false detections to move forward.

So here I frame two more questions:

2) Is there any other good library capable of achieving my needs using depth data?

3) Can we train our own hand gestures, especially ones using fingers? Some papers refer to HMMs; if yes, how do we proceed with a library like OpenNI?

Yes, I tried the middleware libraries in OpenNI, such as the grab detector, but they won't serve my purpose: the grab detector is neither open source nor a match for my needs.

Apart from what I asked, anything you think could help me would be accepted as a good suggestion.

4nonymou5 asked Feb 14 '14


2 Answers

You don't need to train a fist classifier, since that will complicate things. Don't use colour either, since it's unreliable (it mixes with the background and changes unpredictably depending on lighting and viewpoint).

  1. Assuming that your hand is the closest object, you can simply segment it out with a depth threshold. You can set the threshold manually, use the closest region of the depth histogram, or run connected components on the depth map to break it into meaningful parts first (and then select your object based not only on its depth but also on its dimensions, motion, user input, etc.). [The original answer showed three images here: the depth image, its connected components, and a hand mask refined with GrabCut.]
  2. Apply convexity defects from the OpenCV library to find the fingers.

  3. Track the fingers rather than rediscovering them in 3D in every frame; this will increase stability. I successfully implemented such finger detection about 3 years ago.
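As a rough illustration of step 1, here is a minimal sketch of closest-object segmentation by depth threshold. The function name and the fixed `band` tolerance are my own illustrative choices, and a plain list-of-lists stands in for a real depth frame (which you would normally process with NumPy or OpenCV):

```python
def segment_nearest(depth, band=100):
    """Return a boolean mask keeping pixels within `band` depth units
    of the closest valid reading. A value of 0 means 'no reading' on
    many depth sensors, so those pixels are excluded."""
    nearest = min(v for row in depth for v in row if v > 0)
    return [[0 < v <= nearest + band for v in row] for row in depth]

# Toy 3x3 depth map: hand at ~500 mm, wall at ~2000 mm, one hole (0)
depth = [[500, 510, 2000],
         [505,   0, 2000],
         [520, 515, 2000]]
mask = segment_nearest(depth)
```

Running this on the toy map keeps only the five near pixels, dropping both the wall and the invalid reading. The histogram- or connected-components-based variants differ only in how the threshold region is chosen.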

Vlad answered Oct 11 '22

Read my paper :) http://robau.files.wordpress.com/2010/06/final_report_00012.pdf

I have done research on hand gesture recognition and evaluated several approaches that are robust to scale, rotation, etc. You have depth information, which is very valuable, as the hardest problem for me was actually segmenting the hand out of the image.

My most successful approach is to trace the contour of the hand and, for each point on the contour, take the distance to the centroid of the hand. This gives a set of points that can be used as input for many training algorithms.
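A minimal sketch of that centroid-distance signature in pure Python (the normalisation by the maximum distance, which makes the signature scale-invariant, is my own illustrative choice, not necessarily what the paper does):

```python
import math

def centroid(points):
    """Average of the contour points, used as the hand centre."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)

def distance_signature(contour):
    """Distance from each contour point to the centroid, normalised
    to [0, 1] so the signature is independent of hand size."""
    cx, cy = centroid(contour)
    dists = [math.hypot(x - cx, y - cy) for x, y in contour]
    peak = max(dists)
    return [d / peak for d in dists]

# Toy contour: a unit square (a fist-like blob gives a flat signature)
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sig = distance_signature(square)
```

In practice the contour would come from something like OpenCV's `findContours` on the segmented hand mask.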

I use the image moments of the segmented hand to determine its rotation, which gives a good starting point on the hand's contour. It is then very easy to detect a fist, a stretched-out hand, and the number of extended fingers.
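Counting extended fingers can then be reduced to counting prominent peaks in a centroid-distance signature like the one described above; here is a crude sketch, where the threshold and the peak rule are illustrative rather than taken from the paper:

```python
def count_fingers(signature, thresh=0.7):
    """Count local maxima above `thresh` in a cyclic, normalised
    centroid-distance signature. Each prominent peak corresponds to
    a fingertip; a fist gives a flat signature with no strict peaks."""
    n = len(signature)
    peaks = 0
    for i, v in enumerate(signature):
        if v > thresh and v > signature[i - 1] and v >= signature[(i + 1) % n]:
            peaks += 1
    return peaks

# Two synthetic "fingers": high points separated by valleys
two_fingers = [0.3, 1.0, 0.3, 0.2, 0.9, 0.2]
fist = [1.0] * 8  # flat signature, no peaks
```

A real implementation would smooth the signature first and require a minimum peak width, otherwise contour noise registers as extra fingers.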

Note that while it works fine, your arm tends to get tired from pointing into the air.

Rob Audenaerde answered Oct 11 '22