I'm running the following code. The goal is to detect whether the picture "card" is present on the "board", which is a screenshot that may contain that card.
The detection works almost perfectly, but when I draw the matches I notice that some lines are way off.
While the match points on the object side land at the right spots, their counterparts in the scene are often far from where they should be, giving a wrong result.
You can see this in the screenshot below: the object is detected in the scene, but many lines are way out of position. I would like to simply drop the lines that are too far off.
I believe my function already removes matches whose starting points on the object are too far apart, but the same filtering doesn't seem to happen for the points on the scene side. How can I remove those?
bool isCardOnBoard(Mat card, string filename) {
    //-- Step 1: Detect the keypoints using the SURF detector
    vector<KeyPoint> keypoints_object;
    detector.detect(card, keypoints_object);

    //-- Step 2: Calculate descriptors (feature vectors)
    Mat descriptors_object;
    extractor.compute(card, keypoints_object, descriptors_object);

    //-- Step 3: Match descriptor vectors using a brute-force matcher
    // FlannBasedMatcher matcher;
    BFMatcher matcher(extractor.defaultNorm(), false);
    vector<DMatch> matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    //-- Quick calculation of the max and min distances between matches
    double max_dist = 0, min_dist = 100;
    for (int i = 0; i < descriptors_object.rows; i++) {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    // printf("-- Max dist : %f \n", max_dist);
    // printf("-- Min dist : %f \n", min_dist);

    //-- Keep only "good" matches (i.e. whose distance is less than 3*min_dist)
    vector<DMatch> good_matches;
    for (int i = 0; i < descriptors_object.rows; i++) {
        if (matches[i].distance < 3 * min_dist)
            good_matches.push_back(matches[i]);
    }

    if (good_matches.size() > 100) {
        cout << filename << " NOT on the board" << endl;
        return false;
    }

    Mat img_matches;
    drawMatches(card, keypoints_object, board, keypoints_scene,
                good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    //-- Localize the object
    vector<Point2f> obj;
    vector<Point2f> scene;
    // cout << good_matches.size() << endl;
    for (size_t i = 0; i < good_matches.size(); i++) {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }
    Mat H = findHomography(obj, scene, RANSAC);

    //-- Get the corners of the card image (the object to be "detected")
    vector<Point2f> obj_corners(4);
    obj_corners[0] = Point2f(0, 0);
    obj_corners[1] = Point2f(card.cols, 0);
    obj_corners[2] = Point2f(card.cols, card.rows);
    obj_corners[3] = Point2f(0, card.rows);
    vector<Point2f> scene_corners(4);
    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene)
    line(img_matches, scene_corners[0] + Point2f(card.cols, 0), scene_corners[1] + Point2f(card.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[1] + Point2f(card.cols, 0), scene_corners[2] + Point2f(card.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[2] + Point2f(card.cols, 0), scene_corners[3] + Point2f(card.cols, 0), Scalar(0, 255, 0), 4);
    line(img_matches, scene_corners[3] + Point2f(card.cols, 0), scene_corners[0] + Point2f(card.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    imshow("Good Matches & Object detection", img_matches);
    waitKey(0);
    return true;
}
"The lines that are too far apart", as you call them, are outliers: false-positive matches produced by matcher.match( descriptors_object, descriptors_scene, matches );
When you estimate the homography H, findHomography internally uses a robust statistical method to reject those outliers. The method used here is called RANSAC; another method available in OpenCV is LMeDS. As explained in the OpenCV documentation: "The method RANSAC can handle practically any ratio of outliers but it needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers."
I suggest you try different thresholds for RANSAC, or try LMeDS instead. Note that the printed characters in the scene will almost certainly give you outliers.
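As a sketch of those two options (using the obj and scene point vectors from your code; the variable names on the left are just illustrative):

    // Tighter RANSAC reprojection threshold (the 4th parameter,
    // in pixels; OpenCV's default is 3) rejects more outliers:
    Mat H_ransac = findHomography(obj, scene, RANSAC, 1.0);

    // LMeDS needs no threshold, but requires > 50% inliers:
    Mat H_lmeds = findHomography(obj, scene, LMEDS);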
If you just want to "drop the lines that are too far apart" (why?), you can draw only the matches that the homography estimation classified as inliers, i.e. those consistent with the re-projected object.
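One way to do that, sketched against the variables in your code: findHomography accepts an optional output mask whose i-th entry is nonzero exactly when the i-th point pair was accepted as an inlier. Since obj and scene were filled in the same order as good_matches, you can use the mask to filter the matches before drawMatches. The names inlier_mask and inlier_matches are mine:

    //-- Ask findHomography for the inlier mask (5th parameter)
    std::vector<unsigned char> inlier_mask;
    Mat H = findHomography(obj, scene, RANSAC, 3.0, inlier_mask);

    //-- Keep only the matches RANSAC accepted as inliers
    std::vector<DMatch> inlier_matches;
    for (size_t i = 0; i < good_matches.size(); i++) {
        if (inlier_mask[i])
            inlier_matches.push_back(good_matches[i]);
    }

    //-- Draw the inlier matches instead of all the good matches
    drawMatches(card, keypoints_object, board, keypoints_scene,
                inlier_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

With this, every line you draw connects a pair of points that actually agrees with the estimated homography, so the stray lines disappear.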