 

OpenCV 2.2 SURF Feature matching problems

Matching with nothing in the top right corner

I have modified the OpenCV demo application "matching_to_many_images.cpp" to match a query image (left) against frames from the webcam (right). What has gone wrong in the top-right corner of the first image?

We think this is related to another problem we have: we start with an empty database and only add unique features (features that do not match any feature already in the database), but after adding only three features, we get a match for every new feature...

We are using:

SurfFeatureDetector surfFeatureDetector(400, 3, 4);
SurfDescriptorExtractor surfDescriptorExtractor;
FlannBasedMatcher flannDescriptorMatcher;

Complete code can be found at: http://www.copypastecode.com/71973/
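
For context, here is a minimal sketch of the detect/describe/match/draw pipeline the question describes, using the OpenCV 2.x C++ API (the file names and the main() wrapper are placeholders, not taken from the linked code):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

int main()
{
    // Query image and one webcam frame, loaded as grayscale (placeholder file names).
    cv::Mat queryImage = cv::imread("query.png", 0);
    cv::Mat frame      = cv::imread("frame.png", 0);
    if( queryImage.empty() || frame.empty() )
        return 1;

    // The same components as in the question.
    cv::SurfFeatureDetector surfFeatureDetector(400, 3, 4);
    cv::SurfDescriptorExtractor surfDescriptorExtractor;
    cv::FlannBasedMatcher flannDescriptorMatcher;

    // Detect keypoints and compute SURF descriptors for both images.
    std::vector<cv::KeyPoint> queryKeypoints, frameKeypoints;
    surfFeatureDetector.detect(queryImage, queryKeypoints);
    surfFeatureDetector.detect(frame, frameKeypoints);

    cv::Mat queryDescriptors, frameDescriptors;
    surfDescriptorExtractor.compute(queryImage, queryKeypoints, queryDescriptors);
    surfDescriptorExtractor.compute(frame, frameKeypoints, frameDescriptors);

    // Match query descriptors against the frame descriptors and draw the result
    // (query image and keypoints first, train image and keypoints second).
    std::vector<cv::DMatch> matches;
    flannDescriptorMatcher.match(queryDescriptors, frameDescriptors, matches);

    cv::Mat result;
    cv::drawMatches(queryImage, queryKeypoints, frame, frameKeypoints, matches, result);
    cv::imshow("matches", result);
    cv::waitKey(0);
    return 0;
}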

Asked May 30 '11 by Maidenone


People also ask

What is Flann feature matching?

FLANN (Fast Library for Approximate Nearest Neighbors) is a library for fast approximate nearest-neighbor search in high-dimensional spaces. Rather than comparing every descriptor exhaustively, it builds index structures over the descriptors (such as randomized kd-trees or hierarchical k-means trees) and searches them approximately, trading a small loss in accuracy for a large gain in speed.
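
For example, one common way to use a FLANN-based matcher with SURF descriptors in OpenCV is to request the two nearest neighbours for each query descriptor and keep only unambiguous matches (Lowe's ratio test). The helper name ratioTestMatch and the 0.7 threshold below are illustrative assumptions, not part of the original code:

#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Keep a match only if its nearest neighbour is clearly closer than the
// second-nearest one; this discards most ambiguous correspondences.
std::vector<cv::DMatch> ratioTestMatch(const cv::Mat& queryDescriptors,
                                       const cv::Mat& trainDescriptors)
{
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch> > knnMatches;
    matcher.knnMatch(queryDescriptors, trainDescriptors, knnMatches, 2);

    std::vector<cv::DMatch> goodMatches;
    for( size_t i = 0; i < knnMatches.size(); ++i )
    {
        if( knnMatches[i].size() == 2 &&
            knnMatches[i][0].distance < 0.7f * knnMatches[i][1].distance )
        {
            goodMatches.push_back(knnMatches[i][0]);
        }
    }
    return goodMatches;
}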

How does brute force matching work?

The Brute-Force matcher is simple: it takes the descriptor of one feature in the first set and matches it against all features in the second set using some distance calculation, and the closest one is returned.
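
As an illustration of that idea (not OpenCV's own implementation), a naive exhaustive matcher could be written as follows; naiveBruteForce is a hypothetical helper:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>
#include <cfloat>

// For each query descriptor (one row per descriptor), scan every train
// descriptor and keep the one with the smallest L2 distance.
std::vector<cv::DMatch> naiveBruteForce(const cv::Mat& query, const cv::Mat& train)
{
    std::vector<cv::DMatch> matches;
    for( int q = 0; q < query.rows; ++q )
    {
        int bestIdx = -1;
        double bestDist = DBL_MAX;
        for( int t = 0; t < train.rows; ++t )
        {
            double d = cv::norm(query.row(q), train.row(t), cv::NORM_L2);
            if( d < bestDist ) { bestDist = d; bestIdx = t; }
        }
        matches.push_back(cv::DMatch(q, bestIdx, (float)bestDist));
    }
    return matches;
}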

What is feature matching?

Feature matching means finding corresponding features from two similar datasets based on a search distance. One of the datasets is named source and the other target, especially when the feature matching is used to derive rubbersheet links or to transfer attributes from source to target data.


2 Answers

I think this has to do with the border keypoints. The detector detects the keypoints, but for the SURF descriptor to return consistent values it needs pixel data in a block of pixels around each keypoint, which is not available for points near the image border. You can use the following snippet to remove border keypoints after detection but before the descriptors are computed. I suggest using a borderSize of 20 or more.

// Drops keypoints that lie within borderSize pixels of the image edge, so that every
// remaining keypoint has a full block of pixels around it for the descriptor.
// (Needs <algorithm> for std::remove_if.)
void removeBorderKeypoints( std::vector<cv::KeyPoint>& keypoints,
                            const cv::Size imageSize, int borderSize )
{
    if( borderSize > 0 )
    {
        keypoints.erase( std::remove_if( keypoints.begin(), keypoints.end(),
                                RoiPredicatePic( (float)borderSize, (float)borderSize,
                                                 (float)(imageSize.width - borderSize),
                                                 (float)(imageSize.height - borderSize) ) ),
                         keypoints.end() );
    }
}

Where RoiPredicatePic is implemented as:

// Predicate that returns true for keypoints lying outside the rectangle
// [minX, maxX) x [minY, maxY), i.e. the ones to be removed.
struct RoiPredicatePic
{
    RoiPredicatePic(float _minX, float _minY, float _maxX, float _maxY)
    : minX(_minX), minY(_minY), maxX(_maxX), maxY(_maxY)
    {}

    bool operator()( const cv::KeyPoint& keyPt) const
    {
        cv::Point2f pt = keyPt.pt;
        return (pt.x < minX) || (pt.x >= maxX) || (pt.y < minY) || (pt.y >= maxY);
    }

    float minX, minY, maxX, maxY;
};
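
Called between detection and description it would look roughly like this (the image variable is a placeholder; the detector and extractor are the ones named in the question):

// Detect, drop keypoints too close to the border, then describe.
std::vector<cv::KeyPoint> keypoints;
surfFeatureDetector.detect(image, keypoints);

removeBorderKeypoints(keypoints, image.size(), 20);  // borderSize of 20

cv::Mat descriptors;
surfDescriptorExtractor.compute(image, keypoints, descriptors);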

Also, approximate nearest-neighbor indexing is not the best way to match features between a pair of images; I would suggest trying other, simpler matchers (for example, OpenCV's brute-force matcher).
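
For a single image pair, something along these lines would be the simpler alternative (a sketch; the descriptor matrices and the output vector are assumed to come from the surrounding code):

#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Exhaustive matching: every query descriptor is compared against every
// train descriptor with the L2 distance, and the closest one is kept.
void bruteForceMatch(const cv::Mat& queryDescriptors,
                     const cv::Mat& trainDescriptors,
                     std::vector<cv::DMatch>& matches)
{
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("BruteForce");  // L2 distance
    matcher->match(queryDescriptors, trainDescriptors, matches);
}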

Answered Sep 21 '22 by KMS


Your approach works flawlessly, but it shows wrong results because the drawMatches function is called incorrectly.

Your incorrect call was something like this:

drawMatches(image2, image2Keypoints, image1, image1Keypoints, matches, result);

The correct call should be:

drawMatches(image1, image1Keypoints, image2, image2Keypoints, matches, result);
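
The order matters because each DMatch stores a queryIdx into the first descriptor set passed to match() and a trainIdx into the second, and drawMatches interprets its first image/keypoint pair as the query and the second as the train. A sketch with assumed descriptor-matrix names (image1Descriptors, image2Descriptors):

// Match image1's descriptors (query) against image2's (train), then draw
// with the images passed in the same order the matcher saw the descriptors.
void matchAndDraw(const cv::Mat& image1, const std::vector<cv::KeyPoint>& image1Keypoints,
                  const cv::Mat& image1Descriptors,
                  const cv::Mat& image2, const std::vector<cv::KeyPoint>& image2Keypoints,
                  const cv::Mat& image2Descriptors,
                  cv::DescriptorMatcher& matcher, cv::Mat& result)
{
    std::vector<cv::DMatch> matches;
    matcher.match(image1Descriptors, image2Descriptors, matches);  // query = image1

    cv::drawMatches(image1, image1Keypoints, image2, image2Keypoints, matches, result);
}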

Answered Sep 21 '22 by Diego Cerdan Puyol