 

How to calculate % score from ORB algorithm?

I am using the ORB algorithm in OpenCV 2.4.9 with Python to compare images. The ORB algorithm does not return a similarity score as a percentage. Is there any way to compute one?

My code to compare images using ORB is as follows:

img1 = cv2.imread("img11.jpg",0) 
img2 = cv2.imread("img2.jpg",0)
# Initiate ORB detector
orb = cv2.ORB()

# find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

# create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

matches = bf.knnMatch(des1, trainDescriptors = des2, k = 2)

good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
if len(good) > 20:
    print "similar image"

I did find a solution on Stack Overflow that does this for the SIFT algorithm using MATLAB, but is there an external library that can easily be used with OpenCV from Python to do the same?

asked Sep 16 '16 by user93

People also ask

How are key points calculated?

A keypoint is calculated by considering an area of certain pixel intensities around it. Keypoints are calculated using various different algorithms; ORB (Oriented FAST and Rotated BRIEF) uses the FAST algorithm to calculate its keypoints. FAST stands for Features from Accelerated Segment Test.

How does the orb algorithm work?

ORB uses BRIEF descriptors, but BRIEF performs poorly under rotation. So what ORB does is rotate BRIEF according to the orientation of the keypoint: using the orientation of the patch, its rotation matrix is found and used to rotate BRIEF, producing the rotated version.

What is Orb in cv2?

ORB is basically a fusion of the FAST keypoint detector and the BRIEF descriptor, with many modifications to enhance performance. First it uses FAST to find keypoints, then applies the Harris corner measure to find the top N points among them. It also uses a pyramid to produce multiscale features.

What are Keypoints and descriptors?

Keypoints should simply be points (x, y), in my opinion. What describes a point, and essentially the region around it, should be called a descriptor. Some keypoint implementations mix those terms, and keypoints become points with an attached description vector, just like @rayryeng explained.


1 Answer

I don't think keypoint matching lends itself to a percentage score, regardless of whether you use ORB or SIFT.

I think the OP was referring to this post, which does give hints on how to arrive at a score for each match. The score is the sum of the squared distances of the two items in a match pair, i.e.

m.distance**2 + n.distance**2

where m and n are from the OP's posted code. However, this score bears no resemblance to a percentage, and I'm not sure you're going to find one. The magic number 0.75 in the OP's code is known in some places as the Lowe ratio, first proposed in [D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision 60(2), 91-110, 2004]. It's as good a figure of merit as any, but it needs to be adjusted according to the keypoint detection algorithm (e.g. ORB, SIFT). To determine whether you've found a good match, it's common to tweak the Lowe ratio and then count the number of good matches. The Homography tutorial (for OpenCV 2.4 or 3.4.1) is a good example of this.

I'm using OpenCV 3.4 and ORB does return values, just not as many as SIFT. Using the tutorial images "box.png" and "box_in_scene.png", I get 79 "good" matches with SIFT and 7(!) "good" matches with ORB.

However, if I crank up the magic number 0.75 to 0.89 for ORB, I get 79 "good" matches.

Full code using Python 3.4.4 and OpenCV 3.4. Syntax and operation should be very similar for OpenCV 2.4.9:

# This time, we will use BFMatcher.knnMatch() to get k best matches. 
# In this example, we will take k=2 so that we can apply ratio test 
# explained by D.Lowe in his paper. 

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
img1 = cv.imread('box.png',0)          # queryImage
img2 = cv.imread('box_in_scene.png',0) # trainImage

method = 'ORB'  # 'SIFT'
lowe_ratio = 0.89

if method   == 'ORB':
    finder = cv.ORB_create()
elif method == 'SIFT':
    finder = cv.xfeatures2d.SIFT_create()

# find the keypoints and descriptors with the chosen method
kp1, des1 = finder.detectAndCompute(img1,None)
kp2, des2 = finder.detectAndCompute(img2,None)

# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1,des2, k=2)

# Apply ratio test
good = []

for m,n in matches:
    if m.distance < lowe_ratio*n.distance:
        good.append([m])

msg1 = 'using %s with lowe_ratio %.2f' % (method, lowe_ratio)
msg2 = 'there are %d good matches' % (len(good))

img3 = cv.drawMatchesKnn(img1,kp1,img2,kp2,good, None, flags=2)

font = cv.FONT_HERSHEY_SIMPLEX
cv.putText(img3,msg1,(10, 250), font, 0.5,(255,255,255),1,cv.LINE_AA)
cv.putText(img3,msg2,(10, 270), font, 0.5,(255,255,255),1,cv.LINE_AA)
fname = 'output_%s_%.2f.png' % (method, lowe_ratio)
cv.imwrite(fname, img3)

plt.imshow(img3),plt.show()

Using these images for input:

[input images: box.png and box_in_scene.png]

I get these results: [output images: drawn matches for SIFT and for ORB]

However, it's worth noting that ORB gives many more bogus matches that land off the Bastoncini box.

answered Sep 21 '22 by bfris