Matching image and determine best match using SURF

I have been trying to use the EMGU SURFFeature example to determine whether an image is in a collection of images, but I am having trouble understanding how to tell when a match has actually been found.

[Images: Original image, Scene_1 (match), Scene_2 (no match)]

I have been looking at the documentation and have spent hours searching for a way to determine whether the images are the same. As you can see in the following pictures, a "match" is found in both cases.

[Images: matched features drawn for both Scene_1 and Scene_2]

It's clear that the image I'm trying to find gets more matches (connecting lines), but how do I check this in the code?

Question: How do I tell a good match from a bad one?

My goal is to compare an input image (captured from a webcam) against a collection of images in a database. But before I can save all the images to the DB, I need to know which values to store for the comparison (e.g. save the object keypoints and descriptors in the DB). A rough idea of what I mean is sketched below.
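Here is an untested sketch of what I have in mind: extract the SURF features once per database image and cache them, so each webcam frame only needs to be described once and then matched against the stored descriptors. ModelFeatures and ExtractFeatures are just names I made up, not Emgu types:

public class ModelFeatures
{
    public VectorOfKeyPoint KeyPoints;  // could be serialized into the DB
    public Mat Descriptors;             // one row per keypoint, 64 floats (128 for extended SURF)
}

public static ModelFeatures ExtractFeatures(Mat image, double hessianThresh = 800)
{
    ModelFeatures features = new ModelFeatures
    {
        KeyPoints = new VectorOfKeyPoint(),
        Descriptors = new Mat()
    };
    using (UMat uImage = image.ToUMat(AccessType.Read))
    using (SURF surf = new SURF(hessianThresh))
    {
        // Detect keypoints and compute their descriptors in one pass
        surf.DetectAndCompute(uImage, null, features.KeyPoints, features.Descriptors, false);
    }
    return features;
}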

Here is my sample code (the matching part):

private void match_test()
{
    long matchTime;
    using (Mat modelImage = CvInvoke.Imread(@"images\input.jpg", LoadImageType.Grayscale))
    using (Mat observedImage = CvInvoke.Imread(@"images\2.jpg", LoadImageType.Grayscale))
    {
        Mat result = DrawMatches.Draw(modelImage, observedImage, out matchTime);
        //ImageViewer.Show(result, String.Format("Matched using {0} in {1} milliseconds", CudaInvoke.HasCuda ? "GPU" : "CPU", matchTime));
        ib_output.Image = result;
        label7.Text = String.Format("Matched using {0} in {1} milliseconds", CudaInvoke.HasCuda ? "GPU" : "CPU", matchTime);
     }
}

public static void FindMatch(Mat modelImage, Mat observedImage, out long matchTime, out VectorOfKeyPoint modelKeyPoints, out VectorOfKeyPoint observedKeyPoints, VectorOfVectorOfDMatch matches, out Mat mask, out Mat homography)
{
    int k = 2;
    double uniquenessThreshold = 0.9;
    double hessianThresh = 800;

    Stopwatch watch;
    homography = null;

    modelKeyPoints = new VectorOfKeyPoint();
    observedKeyPoints = new VectorOfKeyPoint();

    using (UMat uModelImage = modelImage.ToUMat(AccessType.Read))
    using (UMat uObservedImage = observedImage.ToUMat(AccessType.Read))
    {
        SURF surfCPU = new SURF(hessianThresh);
        //extract features from the object image
        UMat modelDescriptors = new UMat();
        surfCPU.DetectAndCompute(uModelImage, null, modelKeyPoints, modelDescriptors, false);

        watch = Stopwatch.StartNew();

        // extract features from the observed image
        UMat observedDescriptors = new UMat();
        surfCPU.DetectAndCompute(uObservedImage, null, observedKeyPoints, observedDescriptors, false);

        //Match the two SURF descriptors
        BFMatcher matcher = new BFMatcher(DistanceType.L2);
        matcher.Add(modelDescriptors);

        matcher.KnnMatch(observedDescriptors, matches, k, null);

        // One mask entry per observed keypoint: 255 = candidate match, 0 = rejected
        mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1);
        mask.SetTo(new MCvScalar(255));

        // Ratio test: zero out matches whose best distance is not clearly
        // better than the second-best distance
        Features2DToolbox.VoteForUniqueness(matches, uniquenessThreshold, mask);
        int nonZeroCount = CvInvoke.CountNonZero(mask);

        if (nonZeroCount >= 4)
        {
            // Reject matches whose keypoint scale and rotation disagree with
            // the dominant scale/rotation of the surviving matches
            nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints,
               matches, mask, 1.5, 20);

            // A homography needs at least 4 point correspondences
            if (nonZeroCount >= 4)
                homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints,
                   observedKeyPoints, matches, mask, 2);
        }

        watch.Stop();
    }

    matchTime = watch.ElapsedMilliseconds;
}

I really have the feeling I'm not far from the solution; I hope someone can help me out.

asked Mar 15 '16 by Hagbart Celine


1 Answer

On exit from Features2DToolbox.GetHomographyMatrixFromMatchedFeatures, the mask matrix is updated to have zeros where matches are outliers (i.e., don't correspond well under the computed homography). Therefore, calling CountNonZero again on mask should give an indication of match quality.

I see you want to classify matches as "good" or "bad" rather than just compare multiple matches against a single image. From the examples in your question, it looks like a reasonable threshold might be 1/4 of the keypoints found in the input image. You might want an absolute minimum as well, on the grounds that you can't really call something a good match without a certain quantity of evidence. So, e.g., something like

bool FindMatch(...)
{
    bool goodMatch = false;
    // ...
    homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(...);
    int nInliers = CvInvoke.CountNonZero(mask);  // inliers that survived the homography fit
    goodMatch = nInliers >= 10 && nInliers >= observedKeyPoints.Size / 4;
    // ...
    return goodMatch;
}

On branches that never get as far as computing the homography, goodMatch of course just stays false, as initialized. The numbers 10 and 1/4 are somewhat arbitrary and will depend on your application.

(Warning: the above is entirely derived from reading the docs; I haven't actually tried it.)
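To tie this back to your database goal, you could loop over the stored images with your existing FindMatch, count the inliers left in the mask after the homography step, and keep the best-scoring image. Again an untested sketch: the folder path and thresholds are placeholders, and observedImage is assumed to be the already-loaded webcam frame:

string bestPath = null;
int bestInliers = 0;
foreach (string path in Directory.GetFiles(@"images\db", "*.jpg"))
{
    using (Mat model = CvInvoke.Imread(path, LoadImageType.Grayscale))
    using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
    {
        long time;
        VectorOfKeyPoint modelKp, observedKp;
        Mat mask, homography;
        FindMatch(model, observedImage, out time, out modelKp, out observedKp,
                  matches, out mask, out homography);

        if (homography == null)
            continue;  // not enough geometrically consistent matches

        int nInliers = CvInvoke.CountNonZero(mask);  // mask was updated by the homography fit
        if (nInliers >= 10 && nInliers >= observedKp.Size / 4 && nInliers > bestInliers)
        {
            bestInliers = nInliers;
            bestPath = path;
        }
    }
}
// bestPath == null means nothing in the collection matched well enough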

answered Sep 19 '22 by Gareth McCaughan