I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm: pushing the CPU to 100%, it can get through maybe 5 frames per second. In an effort to both save battery power and make the whole thing less "jittery", I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around.
My first thought was to simply use the iPhone's accelerometer/gyroscope, but in testing I found that people would very often move the iPhone at a consistent enough attitude and velocity that there was no way to tell it was still in motion.
So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer. I can get it to an acceptable framerate if I lower the pyramid depth, downsample the image, and track fewer points, but then accuracy suffers and it starts to miss some large movements while triggering on small hand-shake-level ones.
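To give a concrete idea, the trimmed-down version I've been experimenting with looks roughly like this (a Python/OpenCV sketch rather than my actual iPhone code; the corner count, window size, and pyramid depth are just example values):

    import cv2
    import numpy as np

    def sparse_flow_magnitude(prev_gray, curr_gray):
        # Track a small set of corners with pyramidal Lucas-Kanade and
        # return the median displacement in pixels. Fewer corners and a
        # shallower pyramid keep the cost down, but large fast motions
        # start to get missed (exactly the trade-off described above).
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=25,
                                      qualityLevel=0.1, minDistance=10)
        if pts is None:
            return 0.0
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, pts, None,
            winSize=(15, 15), maxLevel=1)  # maxLevel=1 -> shallow pyramid
        mask = status.flatten() == 1
        if not mask.any():
            return 0.0
        displacements = np.linalg.norm(new_pts[mask] - pts[mask], axis=2)
        return float(np.median(displacements))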
So my question is: is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects; I don't even need to know which direction the camera is moving. All I really need is a way to feed something two frames of video and have it tell me how far apart they are.
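In other words, all I'm after is something with this shape (a toy Python/OpenCV sketch; a plain mean absolute difference of downsampled grayscale frames is just the crudest stand-in I can think of):

    import cv2

    def frame_distance(frame_a, frame_b, size=(80, 60)):
        # Crude "how different are these two frames" score: mean absolute
        # pixel difference between small grayscale copies of the frames.
        a = cv2.cvtColor(cv2.resize(frame_a, size), cv2.COLOR_BGR2GRAY)
        b = cv2.cvtColor(cv2.resize(frame_b, size), cv2.COLOR_BGR2GRAY)
        return float(cv2.absdiff(a, b).mean())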
Motion detection algorithm: as input, we receive a stream of frames (images) captured from a video source (for example, a video file or a webcam). The algorithm should gather information about moving objects (size, trajectory, etc.).
Let's walk through it in steps: first, we start capturing video using the cv2 module and store the capture in a video variable. Then we use an infinite while loop to grab each frame from the video, calling the read() method to read each frame and store it in its own variable.
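A minimal version of that loop might look like the following (the motion threshold is just an example value, and camera index 0 is assumed):

    import cv2

    video = cv2.VideoCapture(0)        # capture from the default camera
    ret, prev_frame = video.read()     # first frame as the reference

    while True:
        ret, frame = video.read()      # read the next frame
        if not ret:
            break                      # end of the stream

        # Compare consecutive frames to decide whether anything moved.
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        motion_score = cv2.absdiff(prev_gray, gray).mean()

        if motion_score > 5:           # example threshold, tune per camera
            print("motion detected, score:", motion_score)

        prev_frame = frame
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    video.release()
    cv2.destroyAllWindows()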
Motion detection is a stream of research within computer vision that has gained a lot of importance. Motion detection[1] is the process of detecting a change in the position of an object relative to its surroundings, or a change in the surroundings relative to an object.
There's an open-source (free for private use) project that uses FAST corner detection here: http://www.hatzlaha.co.il/150842/FAST-Corner-V2
It could be very useful for object detection, and it has been heavily optimized to produce smooth, non-jittery results.
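If you just want a quick feel for what FAST corners give you before digging into that project, OpenCV's built-in detector is enough for a rough test (this is plain OpenCV, not the code from the linked project; the threshold is an example value):

    import cv2

    def fast_corner_count(gray_frame):
        # Count FAST corners in a grayscale frame using OpenCV's built-in
        # detector (not the implementation from the linked project).
        fast = cv2.FastFeatureDetector_create(threshold=30)
        keypoints = fast.detect(gray_frame, None)
        return len(keypoints)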
-- EDIT --
Now there's a Lucas-Kanade optical flow project as well - http://www.success-ware.com/150842/Lucas-Kanade-Detection-for-the-iPhone You can download the source code, and there's also a link to the App Store, so you can play around with it and see if it meets your needs.
HTH,
Oded.
Why not use a combination of accelerometer/gyro motion sensing and a very low-res image tracker? Each method seems to be confused by completely different kinds of user motion.
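As a rough sketch of the gating logic (the threshold values and the two motion-score inputs are made-up placeholders, not real sensor APIs):

    # Thresholds and the two motion-score inputs are hypothetical;
    # the point is just the OR between the two signals.
    SENSOR_MOTION_THRESHOLD = 0.1   # arbitrary units from accelerometer/gyro
    IMAGE_MOTION_THRESHOLD = 3.0    # arbitrary mean-pixel-difference score

    def should_run_recognizer(sensor_motion, image_motion):
        # Run the expensive recognizer if EITHER signal says the camera is
        # moving; each method covers the other's blind spots.
        return (sensor_motion > SENSOR_MOTION_THRESHOLD or
                image_motion > IMAGE_MOTION_THRESHOLD)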