[Please read the question details before marking this as a duplicate or down-voting it. I have searched thoroughly and could not find a solution, hence I am posting the question here.]
I am trying to compare one image with multiple images and get a list of ALL matching images. I do NOT want to draw keypoints between images.
My solution is based on the following source code:
https://github.com/Itseez/opencv/blob/master/samples/cpp/matching_to_many_images.cpp
The above source code matches one image against multiple images and returns the single best matching image.
I have modified the above sample and generated:
vector<vector<DMatch>> matches;
vector<vector<DMatch>> good_matches;
Now my question is: how do I apply the nearest-neighbor ratio test to get good matches across multiple images?
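To clarify, by "nearest-neighbor ratio test" I mean the usual ratio test applied to k-NN matches. For a single image pair it looks roughly like this (the 0.75 threshold and the function name are my own choices, not from the sample):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

// Ratio test for one query/train descriptor pair: keep a match only when the
// best neighbour is clearly closer than the second-best neighbour.
std::vector<cv::DMatch> ratioTest(const cv::Mat& queryDesc, const cv::Mat& trainDesc,
                                  float ratio = 0.75f)
{
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch>> knnMatches;
    matcher.knnMatch(queryDesc, trainDesc, knnMatches, 2);   // two nearest neighbours

    std::vector<cv::DMatch> good;
    for (const auto& m : knnMatches)
        if (m.size() == 2 && m[0].distance < ratio * m[1].distance)
            good.push_back(m[0]);
    return good;
}
```

What I cannot figure out is how to do this cleanly when the matcher is trained on the descriptors of many images at once.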
Edit 1:
My implementation is as follows:
1. For each image in the data-set, compute SURF descriptors.
2. Combine all the descriptors into one big matrix.
3. Build a FLANN index from the concatenated matrix.
4. Compute descriptors for the query image.
5. Run a KNN search over the FLANN index to find the top 20 (or fewer) best matching images, with K set to 20.
6. Filter out all the inadequate matches computed in the previous step. (How??)

I have successfully done steps 1 to 5. I am facing a problem in step 6, where I am not able to remove the false matches.
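In code, my steps 1 to 5 look roughly like the following (SURF comes from the xfeatures2d contrib module in my build; the variable and function names are mine, not from the sample):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>   // SURF lives in the contrib module in OpenCV 3.x

using namespace cv;

// Steps 1-5: SURF descriptors for every data-set image, one FLANN index over
// all of them, then a k-NN search with the query descriptors (K = 20).
std::vector<std::vector<DMatch>> matchAgainstDataset(const std::vector<Mat>& datasetImages,
                                                     const Mat& queryImage)
{
    Ptr<Feature2D> surf = xfeatures2d::SURF::create();
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");

    std::vector<Mat> trainDescriptors;
    for (const Mat& img : datasetImages) {
        std::vector<KeyPoint> kp;
        Mat desc;
        surf->detectAndCompute(img, noArray(), kp, desc);
        trainDescriptors.push_back(desc);
    }
    matcher->add(trainDescriptors);   // descriptors of all data-set images
    matcher->train();                 // builds the FLANN index over the merged set

    std::vector<KeyPoint> queryKp;
    Mat queryDesc;
    surf->detectAndCompute(queryImage, noArray(), queryKp, queryDesc);

    std::vector<std::vector<DMatch>> matches;
    matcher->knnMatch(queryDesc, matches, 20);   // K = 20, as in step 5
    return matches;
}
```

Each DMatch returned by a matcher trained on several images carries an imgIdx field, which is what I intend to use to tally good matches per image once step 6 works.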
FLANN stands for Fast Library for Approximate Nearest Neighbors. It contains a collection of algorithms optimized for fast nearest-neighbor search in large datasets and with high-dimensional features, and it is generally faster than BFMatcher for large datasets.
The Brute-Force Matcher, by contrast, matches the features of the first image against another image exhaustively: it takes the first descriptor of the first image and compares it with all descriptors of the second image, then takes the second descriptor of the first image and compares it with all descriptors of the second image, and so on.
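For illustration, a brute-force match in OpenCV looks roughly like this (the NORM_L2 choice assumes float descriptors such as SURF or SIFT):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

// Exact brute-force matching: every query descriptor is compared with every
// train descriptor. NORM_L2 is the appropriate norm for float descriptors.
std::vector<cv::DMatch> bruteForceMatch(const cv::Mat& queryDesc, const cv::Mat& trainDesc)
{
    cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);  // crossCheck keeps only mutual best matches
    std::vector<cv::DMatch> matches;
    matcher.match(queryDesc, trainDesc, matches);
    return matches;
}
```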
There are two answers to your problem: the first is that you should be using a completely different technique; the second is how to actually do what you asked for.
You want to find duplicates of a given query image. Traditionally, you do this by comparing global image descriptors, not local feature descriptors.
The simplest way to do this would be to aggregate the local feature descriptors into a global descriptor. The standard method here is "bag of visual words". In OpenCV this is called Bag-Of-Words (see BOWTrainer, BOWImgDescriptorExtractor, etc.). Have a look at the documentation for using this.
There is some example code in samples/cpp/bagofwords_classification.cpp
The benefits will be that you get more robust results (depending on the implementation of what you are doing now), and that the matching is generally faster.
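A minimal sketch of how those classes fit together (the vocabulary size, detector choice and function name below are placeholders of mine; see the sample above for a full pipeline):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>   // SURF (contrib module)

using namespace cv;

// Build a visual vocabulary from the data-set descriptors, then describe an
// image by one fixed-length histogram of visual words (a global descriptor).
Mat computeBowDescriptor(const std::vector<Mat>& datasetDescriptors, const Mat& image)
{
    // 1) Cluster all local descriptors into a vocabulary (size chosen arbitrarily here).
    BOWKMeansTrainer bowTrainer(1000);
    for (const Mat& d : datasetDescriptors)
        bowTrainer.add(d);
    Mat vocabulary = bowTrainer.cluster();

    // 2) Describe the image as a histogram over that vocabulary.
    Ptr<Feature2D> surf = xfeatures2d::SURF::create();
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    BOWImgDescriptorExtractor bowExtractor(surf, matcher);
    bowExtractor.setVocabulary(vocabulary);

    std::vector<KeyPoint> keypoints;
    surf->detect(image, keypoints);
    Mat bowDescriptor;                       // 1 x vocabularySize histogram
    bowExtractor.compute(image, keypoints, bowDescriptor);
    return bowDescriptor;
}
```

Comparing two images then reduces to comparing their two global descriptors (e.g. with an L2 distance), which is both cheaper and more robust for duplicate detection than matching raw keypoints.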
I understand that you want to remove points from the input that lead to false positives in your matching.
You can't remove points from FLANN(1, 2, 3). FLANN builds a tree for fast search. Depending on the type of tree, removing a node becomes impossible. Guess what, FLANN uses a KD-tree which doesn't (easily) allow removal of points.
FlannBasedMatcher does not support masking permissible matches of descriptor sets because flann::Index does not support this.
I would suggest using a radius search instead of a plain search. Alternatively, look at the L2-distance of the found matches and write a function in your code that checks whether the distance falls below a threshold.
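A sketch of both options against an already trained matcher (the distance threshold has to be tuned for your descriptors; the parameter here is only a placeholder):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

using namespace cv;

// Option A: radius search - only neighbours closer than maxDistance are returned.
std::vector<std::vector<DMatch>> radiusFilter(DescriptorMatcher& trainedMatcher,
                                              const Mat& queryDesc, float maxDistance)
{
    std::vector<std::vector<DMatch>> matches;
    trainedMatcher.radiusMatch(queryDesc, matches, maxDistance);
    return matches;
}

// Option B: run the usual k-NN search and drop matches above a distance threshold.
std::vector<DMatch> distanceFilter(const std::vector<std::vector<DMatch>>& knnMatches,
                                   float maxDistance)
{
    std::vector<DMatch> good;
    for (const auto& perQuery : knnMatches)
        for (const DMatch& m : perQuery)
            if (m.distance < maxDistance)
                good.push_back(m);
    return good;
}
```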
I should also note that you can rebuild your flann-tree. Obviously, there is a performance penalty when doing this. But if you have a large number of queries and some features coming up as false-positives way too often, it might make sense to do this once.
You need the functions DescriptorMatcher::clear() and then DescriptorMatcher::add(const vector&lt;Mat&gt;&amp; descriptors) for this. Reference.
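A sketch of such a rebuild (this assumes you keep the filtered descriptors around yourself; the matcher will not filter them for you):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

using namespace cv;

// Rebuild the FLANN index from scratch after removing descriptors that keep
// producing false positives. filteredDescriptors holds one Mat per data-set image.
void rebuildMatcher(Ptr<DescriptorMatcher>& matcher,
                    const std::vector<Mat>& filteredDescriptors)
{
    matcher->clear();                    // drop the old train set and index
    matcher->add(filteredDescriptors);   // add the cleaned-up descriptors
    matcher->train();                    // rebuild the FLANN tree (the costly part)
}
```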