I need to get the list of the x and y coordinates of the pixels that the feature matcher selects in the code provided below. I'm using Python and OpenCV. Can anyone help me?
import cv2

img1 = cv2.imread('DSC_0216.jpg', 0)
img2 = cv2.imread('DSC_0217.jpg', 0)

orb = cv2.ORB(nfeatures=100000)  # cv2.ORB_create(nfeatures=100000) in OpenCV 3+

kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

img1kp = cv2.drawKeypoints(img1, kp1, color=(0, 255, 0), flags=0)
img2kp = cv2.drawKeypoints(img2, kp2, color=(0, 255, 0), flags=0)
cv2.imwrite('m_img1.jpg', img1kp)
cv2.imwrite('m_img2.jpg', img2kp)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)
In terms of coordinates, a pixel is identified by a pair of integers giving its column number and row number. For example, the pixel with coordinates (3, 5) lies in column 3 and row 5. Conventionally, columns are numbered from left to right and rows from top to bottom, both starting at zero.
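As a quick illustration (the filename is taken from the question; any grayscale image works), keep in mind that NumPy indexes images as img[row, col], i.e. img[y, x]:

import cv2

img = cv2.imread('DSC_0216.jpg', 0)  # grayscale image as a NumPy array
x, y = 3, 5                          # x = column, y = row
value = img[y, x]                    # NumPy indexing is img[row, col]
print(value)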
The Brute-Force matcher is simple. It takes the descriptor of one feature in the first set and matches it against all the features in the second set using some distance calculation, and the closest one is returned. For the BF matcher, we first have to create the BFMatcher object using cv2.BFMatcher().
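A minimal sketch of that flow, reusing des1 and des2 from the question's code:

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# matches that agree in both directions
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)

# Sort so the most similar pairs (smallest distance) come first
matches = sorted(matches, key=lambda m: m.distance)
print(matches[0].distance)  # distance score of the best match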
We know that your keypoints are stored in kp1 and kp2, where they are lists of detected feature points for the first and second image respectively. In the cv2.ORB perspective, the feature descriptors are 2D matrices where each row is a keypoint detected in the first and second image.
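You can verify this directly; each ORB descriptor is a 32-byte row, with one row per keypoint:

print(des1.shape)  # (number of keypoints in img1, 32) for ORB
print(len(kp1))    # equals des1.shape[0]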
In your case, because you are using cv2.BFMatcher, matches returns a list of cv2.DMatch objects, where each object contains several members, and among them are two important ones:

queryIdx - The index or row of the kp1 interest point matrix that matches
trainIdx - The index or row of the kp2 interest point matrix that matches

Therefore, queryIdx and trainIdx tell you which ORB features match between the first and second image. You'd use these to index into kp1 and kp2 and obtain the pt member, which is a tuple of (x, y) coordinates that determine the actual spatial coordinates of the matches.
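For instance, you can inspect these members on the best match before processing all of them:

m = matches[0]                 # best match after sorting by distance
print(m.queryIdx, m.trainIdx)  # indices into kp1 and kp2
print(kp1[m.queryIdx].pt)      # (x, y) of the matched keypoint in img1
print(kp2[m.trainIdx].pt)      # (x, y) of the matched keypoint in img2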
All you have to do is iterate through each cv2.DMatch object in matches, append to a list of coordinates for both kp1 and kp2, and you're done. Something like this:
# Initialize lists
list_kp1 = []
list_kp2 = []

# For each match...
for mat in matches:

    # Get the matching keypoints for each of the images
    img1_idx = mat.queryIdx
    img2_idx = mat.trainIdx

    # x - columns
    # y - rows
    # Get the coordinates
    (x1, y1) = kp1[img1_idx].pt
    (x2, y2) = kp2[img2_idx].pt

    # Append to each list
    list_kp1.append((x1, y1))
    list_kp2.append((x2, y2))
Note that I could have just done list_kp1.append(kp1[img1_idx].pt) and the same for list_kp2, but I wanted to make it very clear on how to interpret the spatial coordinates. You could also go one step further and do a list comprehension:
list_kp1 = [kp1[mat.queryIdx].pt for mat in matches]
list_kp2 = [kp2[mat.trainIdx].pt for mat in matches]
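If you then need the coordinates for a geometric routine, the lists convert directly to NumPy arrays; a small illustrative follow-on (not part of the original question):

import numpy as np

# N x 2 float32 arrays, a layout accepted by functions such as cv2.findHomography
pts1 = np.float32(list_kp1)
pts2 = np.float32(list_kp2)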
list_kp1 will contain the spatial coordinates of a feature point that matched with the corresponding position in list_kp2. In other words, element i of list_kp1 contains the spatial coordinates of the feature point from img1 that matched with the corresponding feature point from img2 in list_kp2, whose spatial coordinates are in element i.
As a minor sidenote, I used this concept when I wrote a workaround for drawMatches, because for OpenCV 2.4.x the Python wrapper to the C++ function does not exist. I made use of the above concept to locate the spatial coordinates of the matching features between the two images and wrote my own implementation of it. Check it out if you like!
'module' object has no attribute 'drawMatches' opencv python