 

Python Image Processing - How to remove certain contour and blend the value with surrounding pixels?

I'm working on a project with depth images, but I have problems with noise and failed pixel readings from my depth camera. There are spots and contours (especially at edges) that have a value of zero. How can I ignore these zero values and blend them with the surrounding values? I have tried dilation and erosion (morphological image processing), but I still can't get the right combination. It removed some of the noise, but I need to get rid of the zeros entirely.

Image Example:

Depth Image

The zero value is the darkest blue (I'm using colormap)

To illustrate what I want to do, please refer to this poor paint drawing:

Illustration

I want to get rid of the black spots (for example, pixels whose value is 0 or some other specific value) and blend them with their surroundings. Yes, I'm able to localize the spots using np.where or a similar function, but I have no idea how to blend them. Maybe there is a filter to apply? I need to do this on a stream, so the process has to be fairly fast; 10-20 fps will do. Thank you in advance!

Update :

Is there a way other than inpainting? I've looked at various inpainting methods, but I don't need anything that sophisticated. I just need to blend the spots with simple lines, curves, or shapes; I think inpainting is overkill. Besides, it needs to be fast enough for a 10-20 fps video stream, or even better.

— juliussin, asked May 02 '20


2 Answers

Here is one way to do that in Python/OpenCV.

Use median filtering to fill the holes.

  • Read the input
  • Convert to gray
  • Threshold to make a mask (spots are black)
  • Invert the mask (spots are white)
  • Find the largest spot contour perimeter from the inverted mask and use half of that value as a median filter size
  • Apply median filtering to the image
  • Apply the mask to the input
  • Apply the inverse mask to the median filtered image
  • Add the two together to form the result
  • Save the results

Input:

(image)

import cv2
import numpy as np

# read image
img = cv2.imread('spots.png')

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold: zero pixels stay black, all others become white
mask = cv2.threshold(gray,0,255,cv2.THRESH_BINARY)[1]

# erode mask to make black regions slightly larger
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5))
mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, kernel)


# make mask 3 channel
mask = cv2.merge([mask,mask,mask])

# invert mask
mask_inv = 255 - mask

# get area of largest contour
contours = cv2.findContours(mask_inv[:,:,0], cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = contours[0] if len(contours) == 2 else contours[1]
perimeter_max = 0
for c in contours:
    perimeter = cv2.arcLength(c, True)
    if perimeter > perimeter_max:
        perimeter_max = perimeter

# use half of the largest perimeter as the median kernel size (must be odd)
radius = int(perimeter_max/2) + 1
if radius % 2 == 0:
    radius = radius + 1
print(radius)

# median filter input image
median = cv2.medianBlur(img, radius)

# apply mask to image
img_masked = cv2.bitwise_and(img, mask)

# apply inverse mask to median
median_masked = cv2.bitwise_and(median, mask_inv)

# add together
result = cv2.add(img_masked,median_masked)

# save results
cv2.imwrite('spots_mask.png', mask)
cv2.imwrite('spots_mask_inv.png', mask_inv)
cv2.imwrite('spots_median.png', median)
cv2.imwrite('spots_masked.png', img_masked)
cv2.imwrite('spots_median_masked.png', median_masked)
cv2.imwrite('spots_removed.png', result)

cv2.imshow('mask', mask)
cv2.imshow('mask_inv', mask_inv )
cv2.imshow('median', median)
cv2.imshow('img_masked', img_masked)
cv2.imshow('median_masked', median_masked)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()


Threshold image as mask:

(image)

Inverted mask:

(image)

Median filtered image:

(image)

Masked image:

(image)

Masked median filtered image:

(image)

Result:

(image)

— fmw42, answered Oct 08 '22

Perhaps a NaN-adjusted Gaussian filter is good and fast enough? If you treat your zeros/black spots as NaNs, this approach also works for larger black areas.

(image)

# import modules
import matplotlib.pyplot as plt
import numpy as np
import skimage
import skimage.filters

# set seed
np.random.seed(42)

# create dummy image
# (smoothed for a more realistic appearance)
size = 50
img = np.random.rand(size, size)
img = skimage.filters.gaussian(img, sigma=5)

# create dummy missing/NaN spots
mask = np.random.rand(size, size) < 0.02
img[mask] = np.nan

# define and apply NaN-adjusted Gaussian filter
# (https://stackoverflow.com/a/36307291/5350621)
def nangaussian(U, sigma=1, truncate=4.0):
    V = U.copy()
    V[np.isnan(U)] = 0
    VV = skimage.filters.gaussian(V, sigma=sigma, truncate=truncate)
    W = np.ones_like(U)
    W[np.isnan(U)] = 0
    WW = skimage.filters.gaussian(W, sigma=sigma, truncate=truncate)
    return VV/WW
smooth = nangaussian(img, sigma=1, truncate=4.0)

# do not smooth full image but only copy smoothed NaN spots
fill = img.copy()
fill[mask] = smooth[mask]

# plot results
vmin, vmax = np.nanmin(img), np.nanmax(img)
aspect = 'auto'
plt.subplot(121)
plt.title('original image (white = NaN)')
plt.imshow(img, aspect=aspect, vmin=vmin, vmax=vmax)
plt.axis('off')
plt.subplot(122)
plt.title('filled image')
plt.imshow(fill, aspect=aspect, vmin=vmin, vmax=vmax)
plt.axis('off')
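To apply this to a depth frame where zeros mark the failed readings (as in the question), convert the zeros to NaN first. A sketch reusing the same NaN-adjusted filter on a synthetic frame (the frame contents are made up for illustration):

```python
import numpy as np
import skimage.filters

def nangaussian(U, sigma=1, truncate=4.0):
    # same NaN-adjusted Gaussian filter as above
    V = np.where(np.isnan(U), 0.0, U)
    VV = skimage.filters.gaussian(V, sigma=sigma, truncate=truncate)
    W = np.where(np.isnan(U), 0.0, 1.0)
    WW = skimage.filters.gaussian(W, sigma=sigma, truncate=truncate)
    return VV / WW

# hypothetical float depth frame; zeros mark failed readings
depth = np.full((60, 80), 1.5)
depth[20:24, 30:34] = 0.0

work = depth.astype(float)
work[work == 0] = np.nan          # treat zeros as missing
smooth = nangaussian(work, sigma=2)

# fill only the failed pixels, leave valid readings untouched
filled = depth.copy()
filled[depth == 0] = smooth[depth == 0]
```

Since only the masked pixels are copied, valid depth values pass through unchanged, which matters if the depths feed further computation.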
— David, answered Oct 08 '22