What's the theory behind computing variance of an image?

I am trying to compute the blurriness of an image by using LaplacianFilter.

According to this article: https://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/ I have to compute the variance of the output image. The problem is that I don't understand, conceptually, how to compute the variance of an image.

Every pixel has a value for each of the 4 colour channels, so I can compute the variance of each channel separately. But then I get 4 values (or even 16, by computing the variance-covariance matrix), whereas the OpenCV example produces only 1 number.

After computing that number, they just play with the threshold in order to make a binary decision, whether the image is blurry or not.
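Conceptually, the tutorial's measure boils down to: apply the Laplacian filter, then take the variance over all the response values as a single number. A minimal NumPy sketch of that idea (a hand-rolled 3×3 Laplacian instead of OpenCV's `cv2.Laplacian`, and any threshold you compare against is purely illustrative):

```python
import numpy as np

# 3x3 Laplacian kernel: responds strongly at edges, ~0 in flat regions
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(gray):
    """Variance of the Laplacian response of a single-channel image."""
    h, w = gray.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * LAPLACIAN)
    return out.var()   # one number for the whole image

# A flat (blur-like) patch gives ~0; a sharp edge gives a large value
flat = np.full((8, 8), 128.0)
edge = np.zeros((8, 8)); edge[:, 4:] = 255.0
```

With real photos you would first convert to greyscale and then compare the score against a threshold tuned for your domain.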

PS: I am by no means an expert on this topic, so my statements may not make sense. If so, please feel free to edit the question.

denis631 asked Jan 18 '18

People also ask

How is variance calculated in image processing?

These are indeed the correct way to calculate the mean and variance over all the pixels of your image. It is not impossible for your variance to be larger than the mean, as the two are defined as follows: mean = sum(x)/length(x); variance = sum((x - mean(x)).^2)/(length(x) - 1);
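In code, those two formulas (note the n − 1 denominator of the sample variance) look like this, treating the image as one flat list of pixel values:

```python
def mean(xs):
    """Arithmetic mean: sum(x) / length(x)."""
    return sum(xs) / len(xs)

def sample_variance(xs):
    """Sample variance: sum((x - mean)^2) / (n - 1), as in the formula above."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

pixels = [1, 2, 3, 4, 5]   # a toy "image" flattened to a list
```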

What is the variance of an image?

A variance image is an image of the variances (the squares of the standard deviations) in the values of the input or output images.

What does mean and variance signify in an image?

The 'mean' value gives the contribution of individual pixel intensities to the entire image, while the variance measures how each pixel varies from its neighbouring pixels (or the centre pixel) and is commonly used to classify an image into different regions.

What is standard deviation of an image?

The standard deviation (σ) provides a measure of the dispersion of image grey-level intensities and can be understood as a measure of the power level of the alternating signal component acquired by the camera.
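Since the standard deviation is just the square root of the variance, the two are interchangeable as dispersion measures. A tiny NumPy check on a toy 2×2 patch:

```python
import numpy as np

# Toy "image" patch: two dark pixels and two bright pixels
patch = np.array([[10.0,  10.0],
                  [200.0, 200.0]])

sigma = patch.std()                 # standard deviation over all pixels
variance = patch.var()              # sigma squared
```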


1 Answer

First things first: if you look at the tutorial you linked, the image is converted to greyscale, so it has only 1 channel and therefore 1 variance. You could compute the variance for each channel and combine them with a more complicated formula, or just take the variance over all the numbers. However, I think the author converts to greyscale because it is a convenient way of fusing the channel information, and one of the papers the author cites actually says that
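To make the "fusing the information" point concrete, here is a small NumPy sketch. Per-channel variances give you 3 numbers; converting to greyscale first gives exactly one. (The luma weights below are the standard BT.601 ones, an assumption on my part; the tutorial itself uses `cv2.cvtColor` for the conversion.)

```python
import numpy as np

def channel_variances(img):
    """One variance per colour channel; img has shape (H, W, 3)."""
    return img.reshape(-1, 3).var(axis=0)           # 3 numbers

def fused_variance(img):
    # Fuse the channels into one greyscale plane first (BT.601 luma weights)
    grey = img @ np.array([0.299, 0.587, 0.114])
    return grey.var()                               # 1 number

img = np.zeros((4, 4, 3))
img[:, 2:, :] = 255.0   # right half white, left half black
```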

A well focused image is expected to have a high variation in grey levels.

The author of the tutorial actually explains it in a simple way. First, think about what the Laplacian filter does: it highlights well-defined edges. Here is an example using the grid of pictures from the tutorial (click on it to see the details better):

(figure: Laplacian responses for the grid of example images)

As you can see, the blurry images barely have any edges, while the focused ones produce many strong responses. Now, what happens if you calculate the variance? Imagine the case where white is 255 and black is 0. If everything is black, the variance is low (the blurry cases), but if the image is roughly half black and half white, the variance is high.
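That intuition is easy to check numerically (population variance here, which is what `.var()` computes in NumPy):

```python
import numpy as np

all_black = np.zeros((10, 10))      # no edge responses at all
half_half = np.zeros((10, 10))
half_half[:, 5:] = 255.0            # half black, half white

low  = all_black.var()              # flat image -> zero variance
high = half_half.var()              # strong contrast -> large variance
```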

However, as the author already said, this threshold is domain-dependent: a picture of the sky may have low variance even when it is in focus, since it is quite uniform and does not have well-defined edges...

I hope this answers your doubts :)

api55 answered Sep 25 '22