In my OpenCV project, I want to detect copy-move forgery in an image. I know how to use OpenCV's FLANN matcher for feature matching between two different images, but I am confused about how to use FLANN to detect copy-move forgery within a single image.
P.S. 1: I get the SIFT keypoints and descriptors of the image, but I am stuck on how to use the feature-matching class.
P.S. 2: The type of feature matching is not important to me.
Thanks in advance.
Update:
These pictures are an example of what I need.
There is code that matches features between two images and does something similar on two images (not a single one). The code, in Android native OpenCV format, is below:
vector<KeyPoint> keypoints;
Mat descriptors;

// Create a SIFT keypoint detector.
SiftFeatureDetector detector;
detector.detect(image_gray, keypoints);
LOGI("Detected %d Keypoints ...", (int) keypoints.size());

// Compute feature descriptors.
detector.compute(image, keypoints, descriptors);
LOGI("Compute Feature ...");

// Match the descriptors against themselves.
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match(descriptors, descriptors, matches);

//-- Quick calculation of max and min distances between keypoints
double max_dist = 0;
double min_dist = 100;
for (int i = 0; i < descriptors.rows; i++)
{
    double dist = matches[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);

//-- Keep only "good" matches (i.e. whose distance is less than 2*min_dist,
//-- or a small arbitrary value (0.02) in the event that min_dist is very small).
//-- PS: radiusMatch can also be used here.
std::vector<DMatch> good_matches;
for (int i = 0; i < descriptors.rows; i++)
{
    if (matches[i].distance <= max(2 * min_dist, 0.02))
        good_matches.push_back(matches[i]);
}

//-- Draw only the "good" matches
Mat img_matches;
drawMatches(image, keypoints, image, keypoints,
            good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
            vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

//-- Show detected matches
// imshow("Good Matches", img_matches);
imwrite(imgOutFile, img_matches);
I don't know if it's a good idea to use keypoints for this problem. I would rather try template matching (using a sliding window over your image as the patch); a sketch of that idea follows below. Compared to keypoints, this method has the disadvantage of being sensitive to rotation and scale.
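For illustration only, here is a minimal sketch of that sliding-window / template-matching idea using cv::matchTemplate. The file name, patch size, stride, and score threshold are placeholder values, and the brute-force double loop is slow on large images:

// Minimal sliding-window sketch (assumption: 8-bit grayscale input; patch size,
// stride and score threshold are arbitrary illustration values).
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>

int main()
{
    cv::Mat image = cv::imread("input.png", cv::IMREAD_GRAYSCALE); // placeholder file name
    const int patch = 32;            // sliding-window size
    const int step = 16;             // stride between tested patches
    const double scoreThresh = 0.98; // normalized correlation threshold

    for (int y = 0; y + patch <= image.rows; y += step)
    {
        for (int x = 0; x + patch <= image.cols; x += step)
        {
            // Use the current window as a template and search the whole image for it.
            cv::Mat tpl = image(cv::Rect(x, y, patch, patch));
            cv::Mat result;
            cv::matchTemplate(image, tpl, result, cv::TM_CCOEFF_NORMED);

            // Mask out the neighborhood of the window itself so it cannot match itself.
            int x0 = std::max(0, x - patch), y0 = std::max(0, y - patch);
            int x1 = std::min(result.cols, x + patch), y1 = std::min(result.rows, y + patch);
            result(cv::Rect(x0, y0, x1 - x0, y1 - y0)).setTo(-1);

            double maxVal;
            cv::Point maxLoc;
            cv::minMaxLoc(result, 0, &maxVal, 0, &maxLoc);

            if (maxVal > scoreThresh)
                std::cout << "possible copy of (" << x << "," << y << ") at ("
                          << maxLoc.x << "," << maxLoc.y << ")" << std::endl;
        }
    }
    return 0;
}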
If you want to use keypoints, you can match the descriptors against themselves with the knnMatch function of the brute-force matcher (cv::BFMatcher), and then keep only the matches between distinct points, i.e. points whose pixel distance is greater than zero (or a threshold).
int nknn = 10;        // max number of matches for each keypoint
double minDist = 0.5; // pixel distance threshold between matched keypoints

// Match each keypoint with every other keypoint (the image against itself).
cv::BFMatcher matcher(cv::NORM_L2, false);
std::vector< std::vector< cv::DMatch > > matches;
matcher.knnMatch(descriptors, descriptors, matches, nknn);

//-- Quick calculation of max and min descriptor distances, skipping the
//-- trivial self-match (each keypoint matches itself at distance 0).
double max_dist = 0; double min_dist = 100;
for (int i = 0; i < (int)matches.size(); i++)
{
    for (int j = 0; j < (int)matches[i].size(); j++)
    {
        if (matches[i][j].trainIdx == matches[i][j].queryIdx)
            continue; // self-match
        double dist = matches[i][j].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
}

// Keep matches that are close in descriptor space but between distinct pixels.
std::vector< cv::DMatch > good_matches;
for (int i = 0; i < (int)matches.size(); i++)
{
    for (int j = 0; j < (int)matches[i].size(); j++)
    {
        // The METRIC (descriptor) distance
        if (matches[i][j].distance > std::max(2 * min_dist, 0.02))
            continue;

        // The PIXELIC (image) distance
        cv::Point2f pt1 = keypoints[matches[i][j].queryIdx].pt;
        cv::Point2f pt2 = keypoints[matches[i][j].trainIdx].pt;
        double dist = cv::norm(pt1 - pt2);
        if (dist > minDist)
            good_matches.push_back(matches[i][j]);
    }
}

cv::Mat img_matches;
cv::drawMatches(image_gray, keypoints, image_gray, keypoints, good_matches, img_matches);
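For completeness, a minimal sketch of how the detection step from the question could feed this matcher, assuming the same OpenCV 2.4.x nonfree SIFT API used above; the file name is a placeholder:

// End-to-end sketch, assuming OpenCV 2.4.x with the nonfree module (the same
// SiftFeatureDetector-era API the question uses). "suspect.jpg" is a placeholder.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <vector>

int main()
{
    cv::Mat image = cv::imread("suspect.jpg");
    cv::Mat image_gray;
    cv::cvtColor(image, image_gray, CV_BGR2GRAY);

    // Detect SIFT keypoints and compute their descriptors on the single image.
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    cv::SiftFeatureDetector detector;
    detector.detect(image_gray, keypoints);
    detector.compute(image_gray, keypoints, descriptors);

    // ...then run the self-matching and filtering code above on
    // `keypoints` and `descriptors`, and inspect/draw `good_matches`.
    return 0;
}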