
How to apply RANSAC on SURF, SIFT and ORB matching results

I'm working on image processing. I want to match 2D features, and I have run many tests with SURF, SIFT, and ORB.
How can I apply RANSAC to the SURF/SIFT/ORB matching results in OpenCV?

asked Apr 14 '13 by Haider


2 Answers

OpenCV has the function cv::findHomography which can optionally use RANSAC to find the homography matrix relating two images. You can see an example of this function in action here.

Specifically the section of code you are interested in is:

FlannBasedMatcher matcher;
std::vector< DMatch > matches;
matcher.match( descriptors_object, descriptors_scene, matches );

//-- Filter the raw matches into good_matches first; the tutorial keeps
//-- only matches whose distance is below a multiple of the minimum distance
std::vector< DMatch > good_matches = /* your filtered matches */ matches;

std::vector<Point2f> obj;
std::vector<Point2f> scene;

for( size_t i = 0; i < good_matches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
    scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
}

Mat H = findHomography( obj, scene, CV_RANSAC );

You can then use the function cv::perspectiveTransform to map points through the homography matrix, or cv::warpPerspective to warp one image into the other's frame.

Besides CV_RANSAC, the other method options for cv::findHomography are 0, which uses every point, and CV_LMEDS, which uses the Least-Median method. More info can be found in the OpenCV camera calibration documentation here.
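
If you are working from Python, the same calls exist in the cv2 bindings. Here is a minimal sketch; the toy point arrays and names like img_object/img_scene are only for illustration and are not from the answer above:

import cv2
import numpy as np

# Toy correspondences: Nx2 float32 arrays of matched keypoint coordinates
# (in practice, build these from your good matches as shown above).
obj_pts = np.float32([[0, 0], [10, 0], [10, 10], [0, 10], [5, 5]])
scene_pts = obj_pts + 2.0  # a pure translation, so the result is easy to check

# method=0 fits on every point, cv2.LMEDS uses Least-Median,
# cv2.RANSAC rejects outliers beyond the given reprojection threshold (pixels).
H, mask = cv2.findHomography(obj_pts, scene_pts, cv2.RANSAC, 3.0)

# Map points through the homography (cv2 expects shape (N, 1, 2) here) ...
projected = cv2.perspectiveTransform(obj_pts.reshape(-1, 1, 2), H)

# ... or warp a whole image into the other image's frame:
# warped = cv2.warpPerspective(img_object, H, (img_scene.shape[1], img_scene.shape[0]))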

answered Sep 19 '22 by Danny


Here is a Python implementation that applies RANSAC from skimage, with either a ProjectiveTransform (i.e. homography) or an AffineTransform model, to the SIFT/SURF keypoint matches. It first applies Lowe's ratio test to the obtained matches and then runs RANSAC on the correspondences that survive the test.

import cv2
from skimage.measure import ransac
from skimage.transform import ProjectiveTransform, AffineTransform
import numpy as np


def siftMatching(img1, img2):
    # Input : image1 and image2 in opencv format
    # Output : corresponding keypoints for source and target images
    # Output Format : Numpy matrix of shape: [No. of Correspondences X 2] 

    surf = cv2.xfeatures2d.SURF_create(100)
    # surf = cv2.xfeatures2d.SIFT_create()

    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    FLANN_INDEX_KDTREE = 0
    index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
    search_params = dict(checks = 50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1,des2,k=2)

    # Lowe's Ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.7*n.distance:
            good.append(m)

    src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1, 2)
    dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1, 2)

    # Ransac
    model, inliers = ransac(
            (src_pts, dst_pts),
            AffineTransform, min_samples=4,
            residual_threshold=8, max_trials=10000
        )

    n_inliers = np.sum(inliers)

    inlier_keypoints_left = [cv2.KeyPoint(point[0], point[1], 1) for point in src_pts[inliers]]
    inlier_keypoints_right = [cv2.KeyPoint(point[0], point[1], 1) for point in dst_pts[inliers]]
    placeholder_matches = [cv2.DMatch(idx, idx, 1) for idx in range(n_inliers)]
    image3 = cv2.drawMatches(img1, inlier_keypoints_left, img2, inlier_keypoints_right, placeholder_matches, None)

    cv2.imshow('Matches', image3)
    cv2.waitKey(0)

    src_pts = np.float32([ inlier_keypoints_left[m.queryIdx].pt for m in placeholder_matches ]).reshape(-1, 2)
    dst_pts = np.float32([ inlier_keypoints_right[m.trainIdx].pt for m in placeholder_matches ]).reshape(-1, 2)

    return src_pts, dst_pts
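
A small driver for the function above might look like this; the image paths are placeholders, and it assumes an OpenCV build with the xfeatures2d contrib module, which SURF needs:

if __name__ == '__main__':
    # Placeholder paths -- replace with your own image pair.
    img1 = cv2.imread('object.jpg', cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)

    src_pts, dst_pts = siftMatching(img1, img2)
    print('RANSAC inlier correspondences:', src_pts.shape[0])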
answered Sep 17 '22 by warriorUSP