Does anyone happen to know why the OpenCV 2 DescriptorMatcher::radiusMatch() and knnMatch() take a vector<vector<DMatch>>& matches? I'm a bit confused about why it wouldn't just be a vector, since it's just a single array of points in the scene that correspond to the training image, right?
I've got something like this:
void getMatchingPoints(
    const vector<vector<cv::DMatch> >& matches,
    const vector<cv::KeyPoint>& keyPtsTemplates,
    const vector<cv::KeyPoint>& keyPtsScene,
    vector<Vec2f>& ptsTemplate,
    vector<Vec2f>& ptsScene
)
{
    ptsTemplate.clear();
    ptsScene.clear();

    // Flatten every DMatch in the nested structure into two
    // parallel lists of corresponding points.
    for (size_t k = 0; k < matches.size(); k++)
    {
        for (size_t i = 0; i < matches[k].size(); i++)
        {
            const cv::DMatch& match = matches[k][i];
            ptsScene.push_back(fromOcv(keyPtsScene[match.queryIdx].pt));
            ptsTemplate.push_back(fromOcv(keyPtsTemplates[match.trainIdx].pt));
        }
    }
}
but I'm a bit confused about how to actually map the approximate location of the object once I have them all in ptsScene. The points seem scattered when I just draw them, so I think I'm missing what the nested vectors represent.
The knnMatch function returns the k nearest-neighbour matches, i.e. if you call knnMatch(queryDescriptors, trainDescriptors, matchesQueryToTrain, 3), where in this case k=3, then for each query descriptor it will find the 3 best matches from the training set.

In terms of your vector<vector<DMatch>>, this means that the outer vector has one entry per query descriptor, and the inner vector holds that descriptor's k nearest training matches, sorted best (smallest distance) first, so matches[i][0] is the strongest match for query descriptor i. Your loop pushes all k candidates for every query point, weak ones included, which is likely why the points you draw look scattered.
There is quite a good example of how to use these k matches along with a cross-checking method in this other question.
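One widely used filter on k=2 results is Lowe's ratio test: keep a match only if its best distance is clearly smaller than its second-best, which discards ambiguous correspondences. A minimal sketch, assuming OpenCV 2 types (the function name and the 0.75 threshold are illustrative choices, not part of the API):

#include <vector>
#include <opencv2/features2d/features2d.hpp>

using std::vector;

// knnMatches comes from knnMatch(..., 2); inner vectors are sorted best first.
void filterByRatioTest(const vector<vector<cv::DMatch> >& knnMatches,
                       vector<cv::DMatch>& goodMatches,
                       float ratio = 0.75f)
{
    goodMatches.clear();
    for (size_t i = 0; i < knnMatches.size(); i++)
    {
        // Need two neighbours to compare; keep the best one only when it
        // beats the runner-up by the given margin.
        if (knnMatches[i].size() >= 2 &&
            knnMatches[i][0].distance < ratio * knnMatches[i][1].distance)
        {
            goodMatches.push_back(knnMatches[i][0]);
        }
    }
}

Feeding only these surviving matches into a flattening routine like your getMatchingPoints should give a much less scattered point set.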
If you want a simple 1-1 matching, then you can call knnMatch with k=1, which will return inner vectors of size 1, or just call match, which outputs matches in the form vector<DMatch> with no second vector.
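For comparison, a minimal sketch of the flat form, assuming binary descriptors such as ORB with a brute-force Hamming matcher (simpleMatch is just an illustrative name; use cv::NORM_L2 instead for float descriptors like SIFT/SURF):

#include <vector>
#include <opencv2/features2d/features2d.hpp>

using std::vector;

void simpleMatch(const cv::Mat& queryDesc, const cv::Mat& trainDesc)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING);

    // One flat list: at most one DMatch per query descriptor.
    vector<cv::DMatch> matches;
    matcher.match(queryDesc, trainDesc, matches);

    // matches[i].queryIdx indexes into queryDesc's rows,
    // matches[i].trainIdx into trainDesc's rows.
}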