
Measure of image similarity for feature matching?

I'm currently trying to work with a brute-force feature matcher using SIFT in OpenCV, in Python. I want to use it for the image search function on my server, where I input an image and have it compared with others, in the hope that the matches will indicate a level of similarity. Is there a way to obtain a level of similarity using feature matching?

Currently, I'm playing around with what I found on this website, which I'll also post below:

import cv2
from matplotlib import pyplot as plt

img1 = cv2.imread('box.png', 0)          # queryImage
img2 = cv2.imread('box_in_scene.png', 0) # trainImage

# Initiate SIFT detector (cv2.SIFT() on old OpenCV 2.x builds, cv2.xfeatures2d.SIFT_create() on 3.x)
sift = cv2.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# BFMatcher with default params (L2 norm, which suits SIFT descriptors)
bf = cv2.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])

# cv2.drawMatchesKnn expects a list of lists as matches
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)

plt.imshow(img3), plt.show()

What I'm using at the moment as a measure of 'similarity' is the number of 'good' matches that survive the ratio test, which I get simply with len(good).
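
In code, the per-image scoring is essentially the following (a simplified sketch of my setup; database_descriptors is just a placeholder for however the stored descriptors are actually loaded on the server):

def count_good_matches(des_query, des_train, ratio=0.75):
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des_query, des_train, k=2)
    # keep only matches that pass the ratio test
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good)

# database_descriptors: placeholder dict of {image_name: precomputed SIFT descriptors}
scores = {name: count_good_matches(des1, des) for name, des in database_descriptors.items()}
# rank database images by the number of 'good' matches, highest first
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)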

This gives the number of good matches that I use to evaluate the similarity of the input image to the images in the database. However, I'm assuming it's not as simple as this, because when I began testing with a picture of a shoe, images such as one of a banana received a higher number of 'good' matches than the other images of shoes, even scoring as more similar than the same shoe in a different colour.

I thought this might just be an anomaly, so I continued testing with a larger dataset of images, and found that, again, other shoes weren't receiving scores (numbers of good matches) as high as, say, an image of a quad-bike or a person.

So basically, how can I define the similarity of two images as a numerical value using feature matching?

Thank you.

asked Apr 05 '17 by Questionnaire



1 Answer

I think you need to choose better features in order to get better (more similar) results. SIFT is a local feature, and there is a good chance you can find similar SIFT features even in images which are semantically different (as in a shoe and a banana).

To improve similarity accuracy, I would suggest deciding on additional features beyond SIFT, such as a colour histogram of the image. If you use colour histograms, you will retrieve images that are similar in colour distribution. You can use a good mix of features to find similarity, and you can decide on this mix by checking what kind of images you have in your database and what you feel could be the discerning features between the different semantic classes.
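
For instance, a colour-histogram comparison could look roughly like this (a minimal sketch, not tested against your data; the file names are placeholders, and HISTCMP_CORREL is only one of several comparison methods OpenCV offers):

import cv2

def colour_hist(path, bins=[8, 8, 8]):
    # 3D histogram over the HSV channels, normalised so image size doesn't matter
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

h1 = colour_hist('query.jpg')      # placeholder file names
h2 = colour_hist('candidate.jpg')

# correlation: 1.0 means identical histograms, lower means less similar
similarity = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)
print(similarity)

You could then combine such a histogram score with the count of good SIFT matches, weighting the two however works best for your database.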

If you are open to using a slightly different method, I would suggest PLSA, which is a method I have used. Probabilistic latent semantic analysis (PLSA) is an unsupervised learning algorithm that represents data in a lower-dimensional space of hidden classes. Similarity can then be found just by computing the Euclidean distance between the new image's low-dimensional representation and those of all the other images. You can sort by distance and get the most similar images. Even here, choosing the right features is important. You will also need to choose the number of hidden classes, which you will have to experiment with.
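
Once you have the low-dimensional representations, the retrieval step itself is simple. A sketch with NumPy (the arrays here are random stand-ins for the per-image hidden-class weights that PLSA would actually produce):

import numpy as np

# stand-in data: in practice these would be the hidden-class weights learned by PLSA
rng = np.random.default_rng(0)
topics = rng.random((100, 25))        # 100 database images, 25 hidden classes (both numbers arbitrary)
query_topic = rng.random(25)          # representation of the query image

# Euclidean distance from the query to every database image; smaller = more similar
dists = np.linalg.norm(topics - query_topic, axis=1)
ranking = np.argsort(dists)           # indices sorted from most to least similar
print(ranking[:5])                    # five closest images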

I have a small project of my own that uses PLSA for image retrieval, so if you don't mind the plug, here it is: PLSA Image retrieval. Unfortunately it is in Matlab, but you can follow what is happening and try to use it. I used a colour histogram as the feature. So choose features that will help you discern the different classes better.

answered Sep 23 '22 by harshkn