Calculating sharpness of an image

I found on the internet that the Laplacian method is quite a good technique to compute the sharpness of an image. I was trying to implement it in OpenCV 2.4.10. How can I get a sharpness measure after applying the Laplacian function? Below is the code:

Mat src_gray, dst;
int kernel_size = 3;
int scale = 1;
int delta = 0;
int ddepth = CV_16S;

GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );

/// Convert the image to grayscale
/// (note: cv::imread loads images in BGR order, so CV_BGR2GRAY is
/// usually the conversion you want here)
cvtColor( src, src_gray, CV_BGR2GRAY );

/// Apply the Laplacian operator
Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );

//compute sharpness
??

Can someone please guide me on this?

asked Feb 25 '15 10:02 by aries



1 Answer

Not exactly the answer, but I arrived at a formula, using an intuitive approach, that worked in the wild.

I'm currently working on a script to detect multiple faces in a picture of a crowd, using mtcnn. It worked very well; however, it also detected many faces so blurry that you couldn't really call them faces.

Example image:

Original image

Faces detected:

red squares for detected faces

Matrix of detected faces:

11x11 matrix faces

mtcnn detected about 123 faces; however, many of them bore little resemblance to a face. In fact, many faces looked more like a stain than anything else...

So I was looking for a way of 'filtering out' those blurry faces. I tried the Laplacian filter and the FFT-based filtering I found in this answer, but I got inconsistent results and poor filtering.

I turned to computer vision topics, and finally tried to implement an 'intuitive' way of filtering based on the following principle:

The blurrier an image is, the fewer 'edges' it has

If we compare a crisp image with a blurred version of the same image, blurring tends to 'soften' any edges or adjacent contrasting regions. Based on that principle, I looked for a way of weighting edges and then a simple way of 'measuring' the result to get a confidence value.

I took advantage of the Canny edge detector in OpenCV and then took the mean of the result (Python):

import cv2
import numpy as np

def getBlurValue(image):
    canny = cv2.Canny(image, 50, 250)  # edge map: 255 at edge pixels, 0 elsewhere
    return np.mean(canny)

Canny returns a 2-D array of the same size as the input image. I selected the thresholds 50 and 250, but they can be changed depending on your image and scenario.

Then I took the average value of the Canny result (definitely a formula that can be improved if you know what you're doing).

When an image is blurred, the result tends toward zero, while a crisp image yields a higher positive value: the crisper the image, the higher the value.

This value depends on the images and the thresholds, so it is not a universal solution for every scenario; however, a better threshold can be obtained by normalizing the result and averaging over all the faces (I need to do more work on that subject).

In the example, the values are in the range 0-27.

Averaging all the faces gave me a blur value of about 3.7.

If I keep only the faces scoring above 3.7:

most blurred faces are filtered

So I was left with the crispest faces:

the remaining crisp faces

That consistently gave me better results than the other tests.

Ok, you got me. This is a tricky way of measuring blurriness within the same image space. But I hope people can take advantage of these findings and apply what I learned in their own projects.

answered Sep 21 '22 15:09 by Vektorsoft