
Background subtraction in OpenCV (C++)

I want to implement a background averaging method. I have 50 frames captured in one second, and some of the frames contain lightning, which I want to extract as the foreground. The frames are taken with a stationary camera and are grayscale. What I want to do is:

  1. Get the background model
  2. Then compare each frame to the background model to determine whether there is lightning in that frame or not.

I read some documents on how this can possibly be done using cvAcc(), but I am having difficulty understanding how. I would appreciate a piece of code to guide me, and links to documents that can help me understand how to implement this.

Thanks in advance.

user854576 asked Oct 14 '11



1 Answer

We had the same task in one of our projects.

To get the background model, we simply create a class BackgroundModel, capture the first (let's say) 50 frames, and calculate the average frame to suppress pixel noise in the background model.

For example, if you get an 8-bit greyscale image (CV_8UC1) from your camera, you initialize your model with CV_16UC1 to avoid clipping.

cv::Mat model = cv::Mat(HEIGHT, WIDTH, CV_16UC1, cv::Scalar(0));

While waiting for the first frames to build your model, just add every frame to the model and count the number of received frames.

void addFrame(cv::Mat frame) {
    cv::Mat convertedFrame;
    frame.convertTo(convertedFrame, CV_16UC1); // widen to 16-bit so the running sum cannot clip
    cv::add(convertedFrame, model, model);
    if (++learnedFrames >= FRAMES_TO_LEARN) {  // FRAMES_TO_LEARN = 50
        createMask();
    }
}

The createMask() function calculates the average frame which we use for the model.

void createMask() {
    // convertScaleAbs scales by 1/learnedFrames and saturates to 8-bit,
    // so mask already ends up as CV_8UC1.
    cv::convertScaleAbs(model, mask, 1.0 / learnedFrames);
}

Now you just send all frames through the BackgroundModel class to a function subtract(). If the result is an empty cv::Mat, the model is still being learned. Otherwise, you get a background-subtracted frame.

cv::Mat subtract(cv::Mat frame) {
    cv::Mat result;
    if (learnedFrames >= FRAMES_TO_LEARN) { // model is ready
        cv::subtract(frame, mask, result);
    }
    else {
        addFrame(frame); // still learning; addFrame counts the frame itself
    }
    return result;
}

Last but not least, you can use cv::sum() (Scalar sum(InputArray src)) to calculate the pixel sum of the subtracted frame and decide whether it is a frame with lightning in it.

ping answered Sep 22 '22