
How to do Gesture Recognition using Accelerometers

My goal is to recognize simple gestures from accelerometers mounted on a Sun SPOT. A gesture could be as simple as rotating the device or moving it through several different motions. The device currently only has accelerometers, but we are considering adding gyroscopes if that would make recognition easier or more accurate.

Does anyone have recommendations for how to do this? Any available libraries in Java? Sample projects you recommend I check out? Papers you recommend?

The Sun SPOT is a Java platform for quickly prototyping systems. It is programmed in Java and can relay commands back to a base station attached to a computer. If I need to explain more about how the hardware works, leave a comment.

asked Oct 13 '08 by smaclell


2 Answers

The accelerometers will be registering a constant acceleration due to gravity, plus any acceleration the device is subjected to by the user, plus noise.

You will need to low-pass filter the samples to get rid of as much irrelevant noise as you can. The worst of the noise will generally be at higher frequencies than any possible human-induced acceleration.
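For example, here is a minimal Java sketch of such a filter (a first-order exponential moving average per axis); the smoothing factor ALPHA is an assumed value you would tune for your sample rate and noise level:

```java
/**
 * Minimal sketch of a first-order low-pass filter (exponential moving average)
 * applied per accelerometer axis. ALPHA is an assumption: tune it so the cutoff
 * sits above human motion (a few Hz) but below the sensor noise.
 */
public class LowPassFilter {
    private static final double ALPHA = 0.2;            // assumed smoothing factor, 0 < ALPHA <= 1
    private final double[] filtered = new double[3];    // x, y, z
    private boolean initialised = false;

    /** Feed one raw sample (x, y, z) and get back the filtered sample. */
    public double[] filter(double[] raw) {
        if (!initialised) {
            System.arraycopy(raw, 0, filtered, 0, 3);
            initialised = true;
        } else {
            for (int i = 0; i < 3; i++) {
                filtered[i] = filtered[i] + ALPHA * (raw[i] - filtered[i]);
            }
        }
        return filtered.clone();
    }
}
```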

Realise that when the device is not being accelerated by the user, the only force acting on it is gravity, and therefore you can deduce its attitude in space. Moreover, when the total acceleration varies greatly from 1 g, it must be due to the user accelerating the device; by subtracting the last known estimate of gravity, you can roughly estimate in what direction and by how much the user is accelerating the device, and so obtain data you can begin to match against a list of known gestures.
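As an illustration of that idea (not taken from any particular library), the sketch below keeps a slowly updated gravity estimate while the total acceleration stays near 1 g, and subtracts it once the magnitude deviates enough to suggest the user is moving the device. The threshold and smoothing constants, and the initial "device lying flat" gravity guess, are assumptions to be tuned:

```java
/**
 * Sketch of separating gravity from user-induced acceleration, assuming the
 * samples have already been low-pass filtered and are expressed in g.
 * GRAVITY_ALPHA and MOTION_THRESHOLD_G are assumed values, not from any library.
 */
public class MotionDetector {
    private static final double GRAVITY_ALPHA = 0.05;      // slow tracker for the gravity direction
    private static final double MOTION_THRESHOLD_G = 0.15; // deviation from 1 g that counts as "user motion"

    private final double[] gravity = {0.0, 0.0, 1.0};      // last known gravity estimate (device assumed flat at start)

    /** Returns the user-induced acceleration, or null while the device looks stationary. */
    public double[] update(double[] sample) {
        double magnitude = Math.sqrt(sample[0] * sample[0]
                                   + sample[1] * sample[1]
                                   + sample[2] * sample[2]);
        if (Math.abs(magnitude - 1.0) < MOTION_THRESHOLD_G) {
            // Device is (approximately) under gravity alone: refresh the gravity estimate.
            for (int i = 0; i < 3; i++) {
                gravity[i] += GRAVITY_ALPHA * (sample[i] - gravity[i]);
            }
            return null;
        }
        // Otherwise subtract the last known gravity estimate to get the user's contribution.
        double[] userAccel = new double[3];
        for (int i = 0; i < 3; i++) {
            userAccel[i] = sample[i] - gravity[i];
        }
        return userAccel;
    }
}
```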

With a single three-axis accelerometer you can detect the current pitch and roll, and also acceleration of the device in a straight line. Integrating acceleration minus gravity will give you an estimate of current velocity, but the estimate will rapidly drift away from reality due to noise. You will have to make assumptions about the user's behaviour before, between, and during gestures, and guide them through your UI, so that there are points where the device is not being accelerated and you can reset your estimates and reliably estimate the direction of gravity. Integrating again to find position is unlikely to provide usable results over any useful length of time at all.
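For the pitch-and-roll part, a stationary reading can be turned into tilt angles with the usual trigonometry. This is a sketch only; the axis convention (Z pointing out of the device face, values in g) is an assumption that may not match the Sun SPOT's board layout:

```java
/**
 * Sketch of deriving pitch and roll from a stationary reading, where the
 * measured acceleration is assumed to be gravity alone.
 */
public final class Attitude {
    private Attitude() {}

    /** Pitch in radians: rotation about the X axis. */
    public static double pitch(double ax, double ay, double az) {
        return Math.atan2(-ax, Math.sqrt(ay * ay + az * az));
    }

    /** Roll in radians: rotation about the Y axis. */
    public static double roll(double ay, double az) {
        return Math.atan2(ay, az);
    }
}
```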

If you have two three-axis accelerometers some distance apart, or one accelerometer and some gyros, you can also detect rotation of the device (by comparing the acceleration vectors, or from the gyros directly); integrating angular velocity over a couple of seconds will give you an estimate of the current yaw relative to where you started integrating, but again this will drift away from the true value rapidly.
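A minimal dead-reckoning sketch for the gyro case, assuming the sensor reports angular rate about the Z axis in radians per second; note the explicit reset, since the estimate is only meaningful relative to the last known orientation:

```java
/**
 * Sketch of dead-reckoning yaw from a gyroscope's angular-rate output.
 * The estimate drifts, so reset it whenever the application knows the
 * device is at a reference orientation.
 */
public class YawEstimator {
    private double yaw;                  // radians, relative to the last reset
    private long lastTimestampNs = -1;

    public void reset() {
        yaw = 0.0;
        lastTimestampNs = -1;
    }

    /** Integrate one gyro sample; timestampNs is the sample time in nanoseconds. */
    public double update(double gyroRateZ, long timestampNs) {
        if (lastTimestampNs >= 0) {
            double dt = (timestampNs - lastTimestampNs) / 1e9;
            yaw += gyroRateZ * dt;
        }
        lastTimestampNs = timestampNs;
        return yaw;
    }
}
```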

answered Sep 19 '22 by moonshadow

Since no one seems to have mentioned existing libraries, as requested by the OP, here goes:

http://www.wiigee.org/

Meant for use with the Wiimote, wiigee is an open-source, Java-based library for pattern matching on accelerometer readings. It accomplishes this using Hidden Markov Models[1].

It was apparently used to great effect by a company, Thorn Technologies, and they've described their experience here: http://www.thorntech.com/2013/07/mobile-device-3d-accelerometer-based-gesture-recognition/
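To give a sense of what the HMM approach involves (this is an illustration of the general technique, not the wiigee API), the classification step scores a quantised accelerometer sequence against each gesture's trained model using the forward algorithm and picks the best-scoring model. The quantisation of acceleration vectors into symbols and the trained parameters are assumed to exist already:

```java
/**
 * Illustrative discrete HMM with a scaled forward algorithm. Classification
 * computes the log-likelihood of an observation sequence under each gesture's
 * trained model and picks the maximum.
 */
public class DiscreteHmm {
    private final double[] initial;      // initial state probabilities, length N
    private final double[][] transition; // N x N state transition probabilities
    private final double[][] emission;   // N x M symbol emission probabilities

    public DiscreteHmm(double[] initial, double[][] transition, double[][] emission) {
        this.initial = initial;
        this.transition = transition;
        this.emission = emission;
    }

    /** Log-likelihood of the observation sequence under this model (forward algorithm). */
    public double logLikelihood(int[] observations) {
        int n = initial.length;
        double[] alpha = new double[n];
        double logProb = 0.0;
        for (int i = 0; i < n; i++) {
            alpha[i] = initial[i] * emission[i][observations[0]];
        }
        logProb += rescale(alpha);
        for (int t = 1; t < observations.length; t++) {
            double[] next = new double[n];
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int i = 0; i < n; i++) {
                    sum += alpha[i] * transition[i][j];
                }
                next[j] = sum * emission[j][observations[t]];
            }
            alpha = next;
            logProb += rescale(alpha);
        }
        return logProb;
    }

    /** Normalise alpha to avoid underflow; return the log of the scaling factor. */
    private static double rescale(double[] alpha) {
        double sum = 0.0;
        for (double a : alpha) sum += a;
        for (int i = 0; i < alpha.length; i++) alpha[i] /= sum;
        return Math.log(sum);
    }
}
```

Given one such model per gesture, classification is simply the argmax of logLikelihood over the trained models.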

Alternatively, you could consider FastDTW (https://code.google.com/p/fastdtw/). It's less accurate than regular DTW[2], but also computationally less expensive, which is a big deal when it comes to embedded systems or mobile devices.
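For reference, plain DTW itself is short enough to sketch directly; the version below compares two traces of 3-axis samples exactly in O(n·m), which is the computation FastDTW approximates:

```java
import java.util.Arrays;

/**
 * Minimal sketch of plain dynamic time warping between two gesture traces,
 * each a sequence of [x, y, z] acceleration samples. For illustration only.
 */
public final class Dtw {
    private Dtw() {}

    /** DTW distance between two sequences of [x, y, z] samples. */
    public static double distance(double[][] a, double[][] b) {
        int n = a.length, m = b.length;
        double[][] cost = new double[n + 1][m + 1];
        for (double[] row : cost) Arrays.fill(row, Double.POSITIVE_INFINITY);
        cost[0][0] = 0.0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double d = euclidean(a[i - 1], b[j - 1]);
                cost[i][j] = d + Math.min(cost[i - 1][j],           // insertion
                                 Math.min(cost[i][j - 1],           // deletion
                                          cost[i - 1][j - 1]));     // match
            }
        }
        return cost[n][m];
    }

    private static double euclidean(double[] p, double[] q) {
        double sum = 0.0;
        for (int k = 0; k < p.length; k++) {
            double diff = p[k] - q[k];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }
}
```

A gesture is then recognised by comparing the incoming trace against a set of recorded template traces and taking the template with the smallest distance.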

[1] https://en.wikipedia.org/wiki/Hidden_Markov_model
[2] https://en.wikipedia.org/wiki/Dynamic_time_warping

EDIT: The OP has mentioned in one of the comments that he completed his project, with 90% accuracy in the field and sub-millisecond compute time, using a variant of the $1 Recognizer. He also mentions that rotation was not a criterion in his project.

answered Sep 23 '22 by Abraham Philip