I copied the Feature Matching with FLANN code from the OpenCV tutorial page and made the following changes:
I modified the check for a 'good match'. Instead of
if( matches[i].distance < 2*min_dist )
I used
if( matches[i].distance <= 2*min_dist )
because otherwise I would get zero good matches when comparing an image with itself: in that case min_dist is 0, so the strict inequality distance < 0 never holds.
I also modified the parameters in the call that draws the matches:
drawMatches( img1, k1, img2, k2,
good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::DEFAULT);
I extracted SIFT features from all the images in the Ireland folder of the INRIA Holidays dataset. Then I compared each image to all the others and drew the matches.
However, there is a strange problem I have never experienced with any other SIFT/matcher implementation I have used in the past:
Is there anyone who used the same code from the OpenCV tutorial and can report a different experience from mine?
Feature matching compares the features of two images, which may differ in orientation, perspective, lighting, or even in size and color.
OpenCV is a library of computer vision algorithms that can be used to perform a wide variety of tasks, including feature matching. OpenCV is available for both Python and C++, making it a popular choice for cross-platform development.
We will see how to match features in one image with others. The Brute-Force matcher is simple: it takes the descriptor of one feature in the first set and matches it against all features in the second set using some distance calculation, and the closest one is returned.
Check out the matcher_simple.cpp example. It uses a brute-force matcher that seems to work pretty well. Here is the code:
// includes needed to make the snippet compile (OpenCV 2.x layout;
// SURF lives in the nonfree module)
#include &lt;opencv2/core/core.hpp&gt;
#include &lt;opencv2/highgui/highgui.hpp&gt;
#include &lt;opencv2/features2d/features2d.hpp&gt;
#include &lt;opencv2/nonfree/features2d.hpp&gt;
#include &lt;vector&gt;
using namespace cv;
using namespace std;
// loading the images (paths are placeholders)
Mat img1 = imread("img1.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat img2 = imread("img2.png", CV_LOAD_IMAGE_GRAYSCALE);
// detecting keypoints
SurfFeatureDetector detector(400);
vector&lt;KeyPoint&gt; keypoints1, keypoints2;
detector.detect(img1, keypoints1);
detector.detect(img2, keypoints2);
// computing descriptors
SurfDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute(img1, keypoints1, descriptors1);
extractor.compute(img2, keypoints2, descriptors2);
// matching descriptors with brute force and L2 distance
BFMatcher matcher(NORM_L2);
vector&lt;DMatch&gt; matches;
matcher.match(descriptors1, descriptors2, matches);
// drawing the results
namedWindow("matches", 1);
Mat img_matches;
drawMatches(img1, keypoints1, img2, keypoints2, matches, img_matches);
imshow("matches", img_matches);
waitKey(0);