I am curious about how OpenCV feature descriptors are compared. For instance, I can use cvExtractSURF()
to get a list of features and their 64-element (or 128-element) descriptors. Where can I find out how two descriptors can be compared?
Stepping through some sample code, it looks to me like two of my "matched" features have very different descriptors (at least by their numerical values).
Has anyone figured out how to take two descriptor arrays and compare them?
Googling hasn't helped too much...
Cheers, Brett
SURF is better than SIFT at rotation invariance, blur, and warp transforms. SIFT is better than SURF for images at different scales. SURF is about 3 times faster than SIFT because it uses an integral image and box filters.
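For context on that speed claim: once an integral image has been built, the sum of pixel values inside any rectangular box can be read off with just four lookups, regardless of the box size, which is why SURF's box filters are cheap. Here is a minimal sketch of that idea in plain C++ (not OpenCV's implementation; the names buildIntegral and boxSum are just illustrative):

```cpp
#include <vector>
#include <cstddef>

// integral[y][x] = sum of img over the rectangle [0, y) x [0, x).
// Assumes a non-empty, rectangular image.
std::vector<std::vector<long long>> buildIntegral(const std::vector<std::vector<int>>& img)
{
    std::size_t rows = img.size(), cols = img[0].size();
    std::vector<std::vector<long long>> integral(rows + 1, std::vector<long long>(cols + 1, 0));
    for (std::size_t y = 0; y < rows; ++y)
        for (std::size_t x = 0; x < cols; ++x)
            integral[y + 1][x + 1] = img[y][x]
                                   + integral[y][x + 1]
                                   + integral[y + 1][x]
                                   - integral[y][x];
    return integral;
}

// Sum over the box with corners (x0, y0) inclusive .. (x1, y1) exclusive:
// four lookups, independent of how large the box is.
long long boxSum(const std::vector<std::vector<long long>>& integral,
                 int x0, int y0, int x1, int y1)
{
    return integral[y1][x1] - integral[y0][x1] - integral[y1][x0] + integral[y0][x0];
}
```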
We showed that ORB is the fastest algorithm, while SIFT performs the best in most scenarios. For the special case where the angle of rotation is a multiple of 90 degrees, ORB and SURF outperform SIFT, and on noisy images, ORB and SIFT show almost similar performance.
SIFT is an algorithm used to extract features from images. SURF is an efficient algorithm with matching performance comparable to SIFT but reduced computational complexity. SIFT holds up in most situations, but it is still slow.
You might want to look at the paper Local invariant feature detectors: a survey. It's a great paper with a description of widely used feature detectors, including SURF.
In the OpenCV 2.1 sample file find_obj.cpp, two matching methods are presented: a naive nearest-neighbour search (naiveNearestNeighbor / compareSURFDescriptors, which compares descriptors by squared Euclidean distance and only considers keypoints whose Laplacian signs agree) and an approximate search built on FLANN (flannFindPairs).
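As a rough illustration of the naive method, here is a minimal self-contained sketch in plain C++ (not the actual find_obj.cpp code; the names descriptorDistanceSq and findNearestNeighbor are mine). Treat each descriptor as a plain array of 64 (or 128) floats, compare two descriptors by their squared Euclidean distance, and accept a match only if the best candidate beats the second best by a ratio test. The 0.6 factor below matches the threshold used in the sample, if memory serves; tune it for your data.

```cpp
#include <cfloat>
#include <vector>

// A SURF descriptor is just a vector of 64 (or 128) floats.
// Two descriptors are "close" when the sum of squared differences is small.
static double descriptorDistanceSq(const float* a, const float* b, int length)
{
    double total = 0.0;
    for (int i = 0; i < length; ++i)
    {
        double d = a[i] - b[i];
        total += d * d;
    }
    return total;
}

// Naive nearest-neighbour match with the usual ratio test: accept a match
// only if the best distance is clearly smaller than the second-best one.
static int findNearestNeighbor(const float* query,
                               const std::vector<const float*>& candidates,
                               int length)
{
    double best = DBL_MAX, secondBest = DBL_MAX;
    int bestIndex = -1;
    for (std::size_t i = 0; i < candidates.size(); ++i)
    {
        double d = descriptorDistanceSq(query, candidates[i], length);
        if (d < best)            { secondBest = best; best = d; bestIndex = (int)i; }
        else if (d < secondBest) { secondBest = d; }
    }
    // Ratio test: reject ambiguous matches (0.6 is an assumed threshold).
    if (best < 0.6 * secondBest)
        return bestIndex;
    return -1;
}
```

Note that large raw distances between matched descriptors are normal; what matters is that the best match is clearly better than the runner-up, which is exactly what the ratio test checks.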