
Detecting objects in OpenCV and real-time comparison

We are making an autonomous robot (for a college event) that follows certain signs and directions and travels along a directed route. The robot will have a camera mounted on its head. It will follow the signs drawn on the road ahead of it, or on the walls, and make its decisions accordingly. The signs will be GREEN ARROWS (for a GO signal) or RED T's (as a sign for HALT). The robot will scan these symbols in real time and perform the necessary action. These signs can be on a wall directly in front of it or drawn on the path ahead.

I have tried looking for the necessary image transform algorithms and methods, but we are quite new to this field. I seek your help on how this problem could be tackled, and any code that may help us (assuming we are beginners).

I have looked into the following threads, but I'm confused:

  • OpenCV Object Detection - Center Point

  • How to recognize rectangles in this image?

  • http://www.chrisevansdev.com/computer-vision-opensurf.html (I'm not able to use it)

One of the hints given for the event was that we can model an arrow as a rectangle and a triangle put together, and find whether the center of the triangle is to the right of the rectangle (which means go right) or the other way around. Similarly for the T's.
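A rough sketch of how we currently understand that hint (completely untested; the HSV color range and the farthest-point test are guesses on our part):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Sketch: segment the green arrow by color, then decide its direction by
// comparing the tip (the contour point farthest from the centroid) with
// the centroid itself. Returns +1 for right, -1 for left, 0 if no sign.
int arrowDirection(const cv::Mat& frameBGR)
{
    cv::Mat hsv, mask;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(40, 80, 80), cv::Scalar(80, 255, 255), mask); // guessed "green" range

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty())
        return 0;

    // Assume the largest green blob is the arrow.
    size_t best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best]))
            best = i;

    cv::Moments m = cv::moments(contours[best]);
    double cx = m.m10 / m.m00, cy = m.m01 / m.m00; // centroid of the blob

    // The arrow tip should be the contour point farthest from the centroid.
    cv::Point tip = contours[best][0];
    double maxD2 = 0;
    for (size_t i = 0; i < contours[best].size(); ++i) {
        double dx = contours[best][i].x - cx, dy = contours[best][i].y - cy;
        if (dx * dx + dy * dy > maxD2) { maxD2 = dx * dx + dy * dy; tip = contours[best][i]; }
    }
    return (tip.x > cx) ? +1 : -1;
}

We picked the farthest-point test because the tip of an arrow is its extreme point, but we are not sure how robust that is.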

Thanks! :)

asked Dec 17 '12 by Vivek Rai


2 Answers

If the signs are known beforehand, you can use the "recognize objects by feature detection" approach.

The idea is that you have a picture of the sign (the arrow or the T) and you perform the following training steps, offline:

1 - Feature detection (using SURF, FAST, ...)

2 - Descriptor extraction (from those features) using SIFT, FREAK, etc.
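
As an offline sketch of those two steps (using the OpenCV 2.4-style API that the snippet further below also uses; the file name and the function itself are just illustrative):

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <string>
#include <vector>

// Offline training: detect keypoints on the known sign image (step 1) and
// extract binary FREAK descriptors (step 2) to store on the robot.
cv::Mat trainSign(const std::string& path, std::vector<cv::KeyPoint>& keypoints_training)
{
    cv::Mat sign = cv::imread(path, CV_LOAD_IMAGE_GRAYSCALE);

    cv::FastFeatureDetector detector;
    cv::FREAK extractor;

    cv::Mat descriptors_training;
    detector.detect(sign, keypoints_training);
    extractor.compute(sign, keypoints_training, descriptors_training);
    return descriptors_training;
}

You would call this once per sign, e.g. trainSign("green_arrow.png", keypoints_training), and keep the returned descriptors.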

Then comes the real-time part. For every frame you have to perform feature detection and descriptor extraction, and then match against the training images to see which object you got. An example that will work in real time:

cv::FastFeatureDetector detector;               // cv::FAST is a function; the detector class is FastFeatureDetector
cv::FREAK descriptor;
cv::BFMatcher matcher(cv::NORM_HAMMING, false); // Hamming distance for binary (FREAK) descriptors

std::vector<cv::KeyPoint> keypoints_frame;
cv::Mat descriptors_frame;
std::vector<cv::DMatch> matches;

detector.detect(frame, keypoints_frame);
descriptor.compute(frame, keypoints_frame, descriptors_frame);
matcher.match(descriptors_training, descriptors_frame, matches);

That would be a first approach for the matching; then you need to refine it and remove outliers. Some techniques are:

  • Ratio Test

  • Cross Check

  • RANSAC + homography
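
For instance, the ratio test plus RANSAC + homography could look like this, continuing from the variables above (the 0.8 ratio and the 3.0 reprojection threshold are typical values, not tuned for this problem):

#include <opencv2/calib3d/calib3d.hpp>

// Ratio test: keep a match only if its best distance is clearly better
// than the second best.
std::vector<std::vector<cv::DMatch> > knnMatches;
matcher.knnMatch(descriptors_training, descriptors_frame, knnMatches, 2);

std::vector<cv::DMatch> good;
for (size_t i = 0; i < knnMatches.size(); ++i)
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.8f * knnMatches[i][1].distance)
        good.push_back(knnMatches[i][0]);

// RANSAC + homography: surviving matches must agree on a single planar
// transformation between the training sign and the frame.
std::vector<cv::Point2f> ptsTrain, ptsFrame;
for (size_t i = 0; i < good.size(); ++i) {
    ptsTrain.push_back(keypoints_training[good[i].queryIdx].pt);
    ptsFrame.push_back(keypoints_frame[good[i].trainIdx].pt);
}
cv::Mat H;
if (ptsTrain.size() >= 4)
    H = cv::findHomography(ptsTrain, ptsFrame, CV_RANSAC, 3.0);

If enough inliers survive, the sign is present, and H also tells you where it sits in the frame.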

Here you have a complete example.

answered Oct 20 '22 by Jav_Rock


I assume you can get the signs before the event: take the arrow sign, extract "SIFT descriptors" from it, and store them in your robot.

Then, in each frame that the robot acquires, look for the color of the sign. When you see something that resembles a sign, extract SIFT descriptors and try to register the stored descriptors against the new ones. If you succeed, try to calculate the rotation and translation matrices between the original stored sign and the sign you found in the image.
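A rough sketch of that per-frame loop (SIFT lives in OpenCV 2.4's nonfree module; the HSV range, the pixel-count threshold, and descriptors_stored are assumptions):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>  // cv::SIFT (OpenCV 2.4 nonfree module)
#include <vector>

// Per-frame: gate on the sign's color first (cheap), and run SIFT only
// when enough candidate pixels are present.
void processFrame(const cv::Mat& frame, const cv::Mat& descriptors_stored)
{
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(40, 80, 80), cv::Scalar(80, 255, 255), mask); // assumed "green" range

    if (cv::countNonZero(mask) < 500)  // assumed threshold: nothing sign-like visible
        return;

    cv::SIFT sift;
    std::vector<cv::KeyPoint> keypoints_frame;
    cv::Mat descriptors_frame;
    sift(frame, mask, keypoints_frame, descriptors_frame); // detect + describe inside the mask only

    cv::BFMatcher matcher(cv::NORM_L2);  // L2 distance for float SIFT descriptors
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors_stored, descriptors_frame, matches);
    // Registration: from the matched points, estimate a homography (as in
    // the first answer) and decompose it for rotation/translation.
}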

To read about SIFT, I would recommend this site: http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/ After you understand the basics of SIFT, I recommend downloading an existing implementation instead of implementing it yourself; it is a very tedious job with many pitfalls.

BTW, although SIFT is a "Scale Invariant Feature Transform", I'm pretty sure it will work in your case too, even though you also have a perspective transform.

Hope it helped

answered Oct 20 '22 by OopsUser