 

Blob tracking algorithm

I'm trying to create simple blob tracking using OpenCV. I have detected the blobs using findContours, and I would like to give those blobs a constant ID.

I have collected the list of blobs in the previous frame and in the current frame, and I have computed the distance between each blob in the previous frame and each blob in the current frame. What else is needed to track the blobs, and how can I use the measured distances to assign each blob a consistent ID across frames?

asked Sep 08 '12 by Moaz ELdeen


People also ask

What is a blob detection algorithm?

In computer vision, blob detection methods are aimed at detecting regions in a digital image that differ in properties, such as brightness or color, compared to surrounding regions.
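
For example, OpenCV ships a ready-made blob detector. A minimal sketch using SimpleBlobDetector (assuming OpenCV 3 or later; the file name and area thresholds are placeholders you would tune for your own images):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    // "blobs.png" is a placeholder: any image with dark blobs on a light background
    cv::Mat img = cv::imread("blobs.png", cv::IMREAD_GRAYSCALE);
    if (img.empty())
        return -1;

    // Filter detected regions by area (the limits here are just example values)
    cv::SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 50;
    params.maxArea = 5000;

    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);

    // Each keypoint holds a blob centre (pt) and an approximate diameter (size)
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(img, keypoints);

    printf("Found %zu blobs\n", keypoints.size());
    return 0;
}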

What is a blob in OpenCV?

Blob stands for Binary Large Object and refers to a group of connected pixels in a binary image. The term "large" indicates that only objects above a certain size are of interest; smaller connected components are usually treated as noise.

What are blob images?

A binary large object (BLOB or blob) is a collection of binary data stored as a single entity. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob.

What is blob detection in MATLAB?

Blob analysis is a computer vision framework for detection and analysis of connected pixels called blobs. This algorithm can be challenging to implement in a streaming design because it usually involves two or more passes through the image.
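
The same kind of blob analysis is available in OpenCV through connected-component labelling. A minimal sketch, assuming OpenCV 3 or later and an already binarised input image (the file name is a placeholder):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    // "binary.png" is a placeholder: any single-channel image with white blobs on black
    cv::Mat binary = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);
    if (binary.empty())
        return -1;

    cv::Mat labels, stats, centroids;
    // Every connected white region gets its own label; label 0 is the background
    int nLabels = cv::connectedComponentsWithStats(binary, labels, stats, centroids);

    for (int i = 1; i < nLabels; ++i)   // skip label 0 (background)
    {
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        double cx = centroids.at<double>(i, 0);
        double cy = centroids.at<double>(i, 1);
        printf("blob %d: area=%d centre=(%.1f, %.1f)\n", i, area, cx, cy);
    }
    return 0;
}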


3 Answers

In the first frame, you can assign IDs any way you like: 1 for the first blob you find, 2 for the second, and so on, or simply give each blob an ID based on its position in the collection.

Then, on the next frame, you have to find the best match. Detect the blobs, compute the distances between every current blob and every blob from the previous frame, and assign each previous ID to the closest current blob. Blobs that have just entered the field of view get new IDs.
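
A minimal sketch of this greedy nearest-neighbour assignment (the Blob struct, the maxDist2 cut-off for declaring a blob "new", and the use of squared distances are my own assumptions, not part of the original answer):

#include <opencv2/opencv.hpp>
#include <vector>

struct Blob
{
    int id;            // persistent ID, -1 until assigned
    cv::Point2f pos;   // centroid of the contour
};

// Assign IDs to current blobs by matching each previous blob to its
// nearest unclaimed current blob; unmatched current blobs get fresh IDs.
void assignIds(const std::vector<Blob>& prev, std::vector<Blob>& curr,
               int& nextId, float maxDist2 = 50.0f * 50.0f)
{
    std::vector<bool> taken(curr.size(), false);

    for (const Blob& p : prev)
    {
        int best = -1;
        float bestDist2 = maxDist2;              // ignore matches that are too far away
        for (size_t i = 0; i < curr.size(); ++i)
        {
            if (taken[i]) continue;
            cv::Point2f d = curr[i].pos - p.pos;
            float dist2 = d.x * d.x + d.y * d.y; // squared distance: no sqrt needed
            if (dist2 < bestDist2) { bestDist2 = dist2; best = (int)i; }
        }
        if (best >= 0) { curr[best].id = p.id; taken[best] = true; }
    }

    // Blobs that just entered the field of view get new IDs
    for (Blob& b : curr)
        if (b.id < 0) b.id = nextId++;
}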

Now that you have two frames, you can do motion prediction for the next one. Just compute deltaX and deltaY between the previous and current position of each blob, use them to predict the future position, and match against that predicted position instead.
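
For example, each tracked blob could carry its last displacement so the matcher can compare against a predicted position instead of the last known one (a sketch under the same assumptions as above; the field and function names are hypothetical):

#include <opencv2/opencv.hpp>

struct TrackedBlob
{
    int id;
    cv::Point2f pos;       // position in the current frame
    cv::Point2f velocity;  // pos(current) - pos(previous), i.e. deltaX / deltaY
};

// Predicted position for the next frame, assuming roughly constant velocity
cv::Point2f predictNext(const TrackedBlob& b)
{
    return b.pos + b.velocity;
}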

This should work as long as you do not have too many overlapping blobs and the movement between frames is not too fast or erratic.

It's possible to be more accurate by using a scoring system over several frames:
Get positions for the first 3 or 5 frames. For any blob of frame 1, find the closest blob in frame 2 and compute the speed (deltaX, deltaY); then find the blob closest to the predicted position in frames 3, 4, 5... Sum up all the distances between the predicted positions and the closest blobs: that sum is the score. Do the same using the 2nd-closest blob in frame 2 (it will search in another direction). The lower the score, the more likely it is the right blob.
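
A rough sketch of how such a score could be computed for one candidate match, given the blob positions of the first few frames (the function name and the vector-of-frames layout are my own; it scores a single hypothesis, which you would then repeat for the 2nd-closest candidate and so on):

#include <opencv2/opencv.hpp>
#include <vector>
#include <limits>

// frames[k] holds the blob centroids detected in frame k.
// 'start' is the blob in frames[0]; 'candidate' is the tentative match in frames[1].
// Returns the summed distance between predicted and closest observed positions.
float matchScore(const std::vector<std::vector<cv::Point2f>>& frames,
                 const cv::Point2f& start, const cv::Point2f& candidate)
{
    cv::Point2f pos = candidate;
    cv::Point2f velocity = candidate - start;   // deltaX, deltaY from frame 0 -> 1
    float score = 0.0f;

    for (size_t k = 2; k < frames.size(); ++k)
    {
        cv::Point2f predicted = pos + velocity;

        // Find the observed blob closest to the predicted position
        float bestDist = std::numeric_limits<float>::max();
        cv::Point2f bestPos = predicted;
        for (const cv::Point2f& p : frames[k])
        {
            float dist = (float)cv::norm(p - predicted);
            if (dist < bestDist) { bestDist = dist; bestPos = p; }
        }

        score += bestDist;                 // lower total distance = more plausible track
        velocity = bestPos - pos;          // update the velocity estimate
        pos = bestPos;
    }
    return score;
}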

If you have a lot of blobs, you should use a quadtree to speed up the search. Compare squared distances; it avoids a lot of sqrt computations.

It's important to know how your blobs typically move in order to tune your algorithm.

answered Oct 16 '22 by bokan


Here's an OpenCV code sample of blob tracking that follows a coloured blob by thresholding in HSV and computing image moments (written against the legacy C API):

#include "stdafx.h"

#include <opencv2/opencv.hpp>

IplImage* GetThresholdedImage(IplImage* img)
{
    // Convert the image into an HSV image
    IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
    cvCvtColor(img, imgHSV, CV_BGR2HSV);

    IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1);

    // Values 20,100,100 to 30,255,255 working perfect for yellow at around 6pm
    cvInRangeS(imgHSV, cvScalar(112, 100, 100), cvScalar(124, 255, 255), imgThreshed);

    cvReleaseImage(&imgHSV);

    return imgThreshed;
}

int main()
{
    // Initialize capturing live feed from the camera
    CvCapture* capture = 0;
    capture = cvCaptureFromCAM(0);  

    // Couldn't get a device? Throw an error and quit
    if(!capture)
    {
        printf("Could not initialize capturing...\n");
        return -1;
    }

    // The two windows we'll be using
    cvNamedWindow("video");
    cvNamedWindow("thresh");

    // This image holds the "scribble" data...
    // the tracked positions of the ball
    IplImage* imgScribble = NULL;

    // An infinite loop
    while(true)
    {
        // Will hold a frame captured from the camera
        IplImage* frame = 0;
        frame = cvQueryFrame(capture);

        // If we couldn't grab a frame... quit
        if(!frame)
            break;

        // If this is the first frame, we need to initialize it
        if(imgScribble == NULL)
        {
            imgScribble = cvCreateImage(cvGetSize(frame), 8, 3);
        }

        // Holds the yellow thresholded image (yellow = white, rest = black)
        IplImage* imgYellowThresh = GetThresholdedImage(frame);

        // Calculate the moments to estimate the position of the ball
        CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments));
        cvMoments(imgYellowThresh, moments, 1);

        // The actual moment values
        double moment10 = cvGetSpatialMoment(moments, 1, 0);
        double moment01 = cvGetSpatialMoment(moments, 0, 1);
        double area = cvGetCentralMoment(moments, 0, 0);

        // Holding the last and current ball positions
        static int posX = 0;
        static int posY = 0;

        int lastX = posX;
        int lastY = posY;

        // Avoid division by zero when no pixels passed the threshold
        if(area > 0)
        {
            posX = moment10/area;
            posY = moment01/area;
        }

        // Print it out for debugging purposes
        printf("position (%d,%d)\n", posX, posY);

        // We want to draw a line only if its a valid position
        if(lastX>0 && lastY>0 && posX>0 && posY>0)
        {
            // Draw a yellow line from the previous point to the current point
            cvLine(imgScribble, cvPoint(posX, posY), cvPoint(lastX, lastY), cvScalar(0,255,255), 5);
        }

        // Add the scribbling image and the frame... and we get a combination of the two
        cvAdd(frame, imgScribble, frame);
        cvShowImage("thresh", imgYellowThresh);
        cvShowImage("video", frame);

        // Wait for a keypress
        int c = cvWaitKey(10);
        if(c!=-1)
        {
            // If pressed, break out of the loop
            break;
        }

        // Release the thresholded image... we need no memory leaks.. please
        cvReleaseImage(&imgYellowThresh);

        free(moments); // allocated with malloc above, so release with free, not delete
    }

    // We're done using the camera. Other applications can now use it
    cvReleaseCapture(&capture);
    return 0;
}

answered Oct 16 '22 by Software_Designer


You can use the cvBlobsLib library for blob detection.

  1. If the inter-frame blob movement is smaller than the inter-blob distance (that is, each blob's displacement between frames is less than the distance between different blobs), you can keep a list and, for each current frame, add every blob that falls in the neighbourhood of a blob from the previous frame.
  2. If your blobs have some fairly constant features, such as ellipticity or aspect ratio (after fitting a bounding box), you can group blobs with similar features into a list (see the sketch after this list).
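
As an illustration of point 2, a minimal sketch that computes the bounding-box aspect ratio of each contour and matches blobs across frames by similar aspect ratio (the tolerance value and helper names are my own assumptions):

#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

// Bounding-box aspect ratio of a contour (width / height)
static float aspectRatio(const std::vector<cv::Point>& contour)
{
    cv::Rect box = cv::boundingRect(contour);
    return box.height > 0 ? (float)box.width / (float)box.height : 0.0f;
}

// Match each previous contour to the current contour with the closest
// aspect ratio, within a tolerance (0.2f here is just an example value).
std::vector<int> matchByAspectRatio(const std::vector<std::vector<cv::Point>>& prev,
                                    const std::vector<std::vector<cv::Point>>& curr,
                                    float tolerance = 0.2f)
{
    std::vector<int> match(prev.size(), -1);   // index into curr, or -1 if unmatched
    for (size_t i = 0; i < prev.size(); ++i)
    {
        float target = aspectRatio(prev[i]);
        float bestDiff = tolerance;
        for (size_t j = 0; j < curr.size(); ++j)
        {
            float diff = std::fabs(aspectRatio(curr[j]) - target);
            if (diff < bestDiff) { bestDiff = diff; match[i] = (int)j; }
        }
    }
    return match;
}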
answered Oct 16 '22 by rotating_image