Try first using cv2.resize with the standard interpolation algorithms (and time how long the resizing takes). Then run the same operation, but swap in OpenCV's super-resolution module (and again time how long it takes).
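A minimal sketch of that timing comparison, assuming opencv-contrib-python is installed (for the cv2.dnn_superres module) and that a pre-trained EDSR model file has been downloaded; the file names and the x4 scale here are placeholders:

import time
import cv2

img = cv2.imread('input.jpg')          # placeholder input image
scale = 4

# baseline: plain bicubic resize
t0 = time.time()
bicubic = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
print('cv2.resize (bicubic):', time.time() - t0, 's')

# DNN super-resolution (requires opencv-contrib-python and a downloaded model file)
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel('EDSR_x4.pb')             # placeholder path to a pre-trained EDSR model
sr.setModel('edsr', scale)
t0 = time.time()
upscaled = sr.upsample(img)
print('dnn_superres (EDSR x4):', time.time() - t0, 's')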
One general procedure is laid out in the Wikipedia article on unsharp masking:
You use a Gaussian smoothing filter and subtract the smoothed version from the original image (in a weighted way so the values of a constant area remain constant).
To get a sharpened version of frame into image (both cv::Mat):
cv::GaussianBlur(frame, image, cv::Size(0, 0), 3);    // blur with sigma = 3; kernel size is derived from sigma
cv::addWeighted(frame, 1.5, image, -0.5, 0, image);   // image = 1.5*frame - 0.5*blurred
The parameters are something you will need to tune for your own images.
There is also Laplacian sharpening; a quick search should turn up plenty of material on it.
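As a rough, hedged sketch of the Laplacian approach (the file name and the 0.5 weight are just placeholders to illustrate subtracting a scaled Laplacian from the original):

import cv2
import numpy as np

img = cv2.imread('input.jpg')                  # placeholder file name
lap = cv2.Laplacian(img, cv2.CV_32F, ksize=3)  # second-derivative (edge) response
sharpened = np.clip(img.astype(np.float32) - 0.5 * lap, 0, 255).astype(np.uint8)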
You can try a simple kernel and the filter2D function, e.g. in Python:
import cv2
import numpy as np

kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])  # weights sum to 1, so flat regions keep their value
im = cv2.filter2D(im, -1, kernel)  # -1 keeps the depth of the source image
Wikipedia has a good overview of kernels with some more examples here - https://en.wikipedia.org/wiki/Kernel_(image_processing)
In image processing, a kernel, convolution matrix, or mask is a small matrix. It is used for blurring, sharpening, embossing, edge detection, and more. This is accomplished by doing a convolution between a kernel and an image.
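To make that concrete, here is a small hedged check (array values chosen arbitrarily) showing that cv2.filter2D is just the weighted neighbourhood sum the quote describes, and that the sharpening kernel above leaves constant regions unchanged because its weights sum to 1:

import cv2
import numpy as np

kernel = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]], dtype=np.float32)
print(kernel.sum())                                  # 1.0 -> constant areas keep their value

img = np.arange(25, dtype=np.float32).reshape(5, 5)  # arbitrary small test image
out = cv2.filter2D(img, -1, kernel)

# for an interior pixel the result is just the element-wise product of the
# kernel with the 3x3 neighbourhood, summed up
manual = np.sum(img[1:4, 1:4] * kernel)
print(out[2, 2], manual)                             # both values agree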
You can sharpen an image using an unsharp mask. You can find more information about unsharp masking here. And here's a Python implementation using OpenCV:
import cv2 as cv
import numpy as np

def unsharp_mask(image, kernel_size=(5, 5), sigma=1.0, amount=1.0, threshold=0):
    """Return a sharpened version of the image, using an unsharp mask."""
    blurred = cv.GaussianBlur(image, kernel_size, sigma)
    sharpened = float(amount + 1) * image - float(amount) * blurred
    sharpened = np.maximum(sharpened, np.zeros(sharpened.shape))
    sharpened = np.minimum(sharpened, 255 * np.ones(sharpened.shape))
    sharpened = sharpened.round().astype(np.uint8)
    if threshold > 0:
        # compare in a signed type so the uint8 subtraction cannot wrap around
        low_contrast_mask = np.absolute(image.astype(np.int16) - blurred.astype(np.int16)) < threshold
        np.copyto(sharpened, image, where=low_contrast_mask)
    return sharpened

def example():
    image = cv.imread('my-image.jpg')
    sharpened_image = unsharp_mask(image)
    cv.imwrite('my-sharpened-image.jpg', sharpened_image)
You can find sample code for sharpening an image with the "unsharp mask" algorithm in the OpenCV documentation.
Changing the values of sigma, threshold, and amount will give different results.
// sharpen image using "unsharp mask" algorithm
Mat blurred; double sigma = 1, threshold = 5, amount = 1;
GaussianBlur(img, blurred, Size(), sigma, sigma);
Mat lowContrastMask = abs(img - blurred) < threshold;   // pixels that barely changed under blurring
Mat sharpened = img*(1+amount) + blurred*(-amount);     // img + amount*(img - blurred)
img.copyTo(sharpened, lowContrastMask);                 // keep the original in low-contrast areas
Any image is a collection of signals of various frequencies: the higher frequencies carry the edges and the lower frequencies carry the smooth image content. An edge is formed where there is a sharp transition between adjacent pixel values, such as 0 next to 255; that abrupt change is the edge, and it corresponds to high frequency. To sharpen an image, these transitions can be enhanced further.
One way is to convolve a hand-made filter kernel with the image:
import cv2
import numpy as np
image = cv2.imread('images/input.jpg')
kernel = np.array([[-1, -1, -1],
                   [-1,  9, -1],
                   [-1, -1, -1]])
# apply the sharpening kernel to the input image and display the result
sharpened = cv2.filter2D(image, -1, kernel)
cv2.imshow('Image Sharpening', sharpened)
cv2.waitKey(0)
cv2.destroyAllWindows()
There is another method: subtract a blurred version of the image from a brightened version of it. This helps sharpen the image, but it should be done with caution, because we are simply increasing pixel values. Imagine a grayscale pixel value of 190: multiplied by a weight of 2 it becomes 380, but it is clipped at 255, the maximum allowable pixel value. That is information loss, and it leads to a washed-out image.
addWeighted(frame, 1.5, image, -0.5, 0, image);   // here image holds a blurred copy of frame
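A short Python sketch of that weighted-subtraction approach, plus a tiny demonstration of the clipping the warning above refers to (the file name and the weights are placeholders):

import cv2
import numpy as np

frame = cv2.imread('input.jpg')                   # placeholder file name
blurred = cv2.GaussianBlur(frame, (0, 0), 3)
sharpened = cv2.addWeighted(frame, 1.5, blurred, -0.5, 0)

# addWeighted saturates at 255 for 8-bit images, which is where detail gets lost
a = np.full((1, 1), 200, dtype=np.uint8)
b = np.zeros((1, 1), dtype=np.uint8)
print(cv2.addWeighted(a, 2.0, b, 0.0, 0))         # prints [[255]], not 400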
For clarity in this topic, a few points really should be made:
Sharpening images is an ill-posed problem. In other words, blurring is a lossy operation, and going back from it is in general not possible.
To sharpen single images, you need to somehow add constraints (assumptions) on what kind of image you want and how it became blurred. This is the area of natural image statistics. Approaches to sharpening hold these statistics explicitly or implicitly in their algorithms (deep learning being the most implicit). The common approach of up-weighting some levels of a DoG or Laplacian pyramid decomposition, which is the generalization of Brian Burns's answer, assumes that Gaussian blurring corrupted the image, and how the weighting is done is connected to assumptions about what was in the image to begin with.
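As a hedged illustration of that pyramid view (not any particular paper's method), here is a minimal sketch that builds a Laplacian pyramid with OpenCV and up-weights the finest detail level before reconstructing; the number of levels and the gain are arbitrary choices:

import cv2
import numpy as np

def pyramid_sharpen(img, levels=3, gain=1.5):
    # Gaussian pyramid
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    # Laplacian pyramid: each level minus the upsampled coarser level
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    lp.append(gp[-1])                       # coarsest Gaussian level
    lp[0] *= gain                           # up-weight the finest detail band
    # reconstruct from coarse to fine
    out = lp[-1]
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(lp[i].shape[1], lp[i].shape[0])) + lp[i]
    return np.clip(out, 0, 255).astype(np.uint8)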
Other sources of information can make the sharpening problem well-posed. Common sources of such information are video of a moving object or a multi-view setup. Sharpening in that setting is usually called super-resolution (a rather poor name for it, but it has stuck in academic circles). There have been super-resolution methods in OpenCV for a long time, although they usually don't work that well for real problems, last I checked. I expect deep learning has produced some wonderful results here as well; maybe someone will post remarks on what's worthwhile out there.