 

How can I avoid including zeros when using the OpenCV filter2D function?

Tags:

python

opencv

I am using the filter2D function in opencv-python on a satellite image that has several black-fill values (zeros) around the edges. Here is an example of what I am talking about: https://landsat.gsfc.nasa.gov/wp-content/uploads/2013/06/truecolor.jpg

When I use filter2D on that image, the pixels from the black-filled area are treated as valid values, creating edge artifacts. How can I exclude the zeros from the calculation? For example, in IDL I can use the MISSING and INVALID keywords like this:

output = CONVOL(input, kernel, /EDGE_TRUNCATE, MISSING=0.0, INVALID=0.0, /NAN, /NORMALIZE)

and avoid issues at the edges, but I cannot find similar functionality in opencv. How can I get around this issue?

asked Dec 04 '25 by Knulph


1 Answer

There are no mask or ignore parameters in the OpenCV function. However, the only artifacts will be outside the image, i.e., in the black region. Whenever the anchor of the filter (by default, the middle pixel) is on the edge but over a black pixel, the filtered result is written to that pixel. But when the anchor is over the image itself, the black values won't add anything to your filter sum. So a simple solution is to create a mask from the black values and remove those pixels from your filtered image.
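Here's a minimal sketch of that mask-and-remove idea (the image and kernel are toy stand-ins; in practice you'd use your satellite band and your own kernel, and the assumption is that black fill is exactly 0):

import cv2
import numpy as np

# Toy stand-ins: a small image with a zero border and a box kernel
img = np.array([
    [0, 0, 0, 0],
    [0, 5, 5, 0],
    [0, 5, 5, 0],
    [0, 0, 0, 0]], dtype=np.float32)
kernel = np.ones((3, 3), dtype=np.float32) / 9

mask = img != 0                           # True where the pixel is real data
filtered = cv2.filter2D(img, -1, kernel)
filtered[~mask] = 0                       # discard results written over the black fill
print(filtered)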

Edit: OK, so from the IDL CONVOL docs:

Tip: The use of the INVALID keyword is equivalent to treating those values as 0.0 when computing the convolution sum. You can use the NORMALIZE keyword to exclude these points entirely.

Tip: If NORMALIZE is set and your input array has missing data (the INVALID or NAN keywords are set), then for each result value the scale factor and bias are computed using only those kernel values that contributed to that result value. This ensures that all result values are comparable in magnitude, regardless of any missing data.

So from here we see that the points are "excluded" by treating the invalid pixels as 0, but then normalizing the sum by dividing by the number of valid pixels under the kernel rather than by the full kernel size. For example, if 3 of the 9 pixels under a 3x3 kernel are invalid, the sum runs over the 6 valid pixels and (in the unweighted case) is divided by 6 instead of 9.

This isn't possible in OpenCV, at least not with the built-in filtering methods, because OpenCV does not normalize the filtered result. See in the docs for filter2D() that the equation is just simple correlation, and there is no division.
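Paraphrasing the docs, what filter2D() computes is just

dst(x, y) = sum over (x', y') of kernel(x', y') * src(x + x' - anchor.x, y + y' - anchor.y)

so every kernel value contributes at full weight, valid or not, and nothing in that expression rescales the result.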

Now, what you could do is manually normalize, and this isn't too hard:

1. Create a mask whose values are 1 inside the valid image and 0 outside.
2. Run boxFilter() on the mask with the same kernel size as your filter2D(); at each location this produces (up to a constant factor) the count of valid pixels inside the kernel window.
3. Mask the boxFilter() result so that values outside the image bounds are ignored.
4. Divide your filter2D() result by the masked boxFilter() result, skipping locations where the boxFilter() result is 0 so you don't divide by zero.

That should do exactly what you want.

Edit2: So here's a concrete example. First I'll define a simple image (7x7 with a 5x5 inner square of 1s):

import cv2
import numpy as np

img = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0]], dtype=np.float32)

And we'll keep to a simple filtering example with a Gaussian kernel:

gauss_kernel = np.array([
    [1/16, 1/8, 1/16],
    [1/8, 1/4, 1/8],
    [1/16, 1/8, 1/16]], dtype=np.float32)

Now first, filter the image...

filtered = cv2.filter2D(img, -1, gauss_kernel)
print(filtered)

[[ 0.25    0.375   0.5     0.5     0.5     0.375   0.25  ]
 [ 0.375   0.5625  0.75    0.75    0.75    0.5625  0.375 ]
 [ 0.5     0.75    1.      1.      1.      0.75    0.5   ]
 [ 0.5     0.75    1.      1.      1.      0.75    0.5   ]
 [ 0.5     0.75    1.      1.      1.      0.75    0.5   ]
 [ 0.375   0.5625  0.75    0.75    0.75    0.5625  0.375 ]
 [ 0.25    0.375   0.5     0.5     0.5     0.375   0.25  ]]

And this is what we expected... the Gaussian blur of a bunch of 1s should be, well, 1s, with some decay at the edges, both inside the image and outside it in the zero area. So we'll create a mask; in this case it's just the same as the image. Then we'll run a box filter over the mask to get the correct scaling values:

mask = img.copy()  # in this case, the mask is identical to the image
# normalize=True by default: returns the fraction of each window that is non-zero
scaling_vals = cv2.boxFilter(mask, -1, gauss_kernel.shape, borderType=cv2.BORDER_CONSTANT)
print(scaling_vals)

[[ 0.111  0.222  0.333  0.333  0.333  0.222   0.111]
 [ 0.222  0.444  0.666  0.666  0.666  0.444   0.222]
 [ 0.333  0.666  1.     1.     1.     0.666   0.333]
 [ 0.333  0.666  1.     1.     1.     0.666   0.333]
 [ 0.333  0.666  1.     1.     1.     0.666   0.333]
 [ 0.222  0.444  0.666  0.666  0.666  0.444   0.222]
 [ 0.111  0.222  0.333  0.333  0.333  0.222   0.111]]

Note that if you multiply this by 9 (the number of values in the kernel), you get exactly the number of non-zero pixels around each pixel location. So this is our normalizing scale factor. Now all that's left to do is normalize and remove the stuff outside the image borders:

mask = mask.astype(bool)  # turn mask bool for indexing
normalized_filter = filtered.copy()
normalized_filter[mask] /= scaling_vals[mask]
normalized_filter[~mask] = 0
print(normalized_filter)

[[ 0.  0.     0.     0.     0.     0.     0. ]
 [ 0.  1.265  1.125  1.125  1.125  1.265  0. ]
 [ 0.  1.125  1.     1.     1.     1.125  0. ]
 [ 0.  1.125  1.     1.     1.     1.125  0. ]
 [ 0.  1.125  1.     1.     1.     1.125  0. ]
 [ 0.  1.265  1.125  1.125  1.125  1.265  0. ]
 [ 0.  0.     0.     0.     0.     0.     0. ]]

Now these values aren't perfect, but neither is the biased sum. Note that even the IDL docs state:

you should use caution when analyzing these values, as the result may be biased by having fewer points within the kernel.

So you're not going to get perfect results by scaling like this. But we can do better! The scaling factor above only used the number of valid points, not the actual kernel weight associated with each of those points. To account for the weights, we can filter the mask itself with the kernel; in other words, just run filter2D() over the mask instead of the image, then mask and divide as before. Don't be confused by the fact that the mask and the image are identical in this example: dividing the filtered image by the filtered mask here gives exactly 1 everywhere inside, but in general it's simply a better scaling than the box filter.

mask = img.copy()
scaling_vals = cv2.filter2D(mask, -1, gauss_kernel)

mask = mask.astype(bool)  # turn mask bool for indexing
normalized_filter = filtered.copy()
normalized_filter[mask] /= scaling_vals[mask]
normalized_filter[~mask] = 0
print(normalized_filter)

[[ 0.  0.  0.  0.  0.  0.  0.]
 [ 0.  1.  1.  1.  1.  1.  0.]
 [ 0.  1.  1.  1.  1.  1.  0.]
 [ 0.  1.  1.  1.  1.  1.  0.]
 [ 0.  1.  1.  1.  1.  1.  0.]
 [ 0.  1.  1.  1.  1.  1.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.]]
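To tie this back to the satellite image from the question, here's a minimal end-to-end sketch of that weighted version (the file name is hypothetical, and the assumption that black fill is exactly 0 may need adjusting for your data; BORDER_CONSTANT zero-pads both filters so pixels past the array edge are treated as invalid too, consistent with the mask):

import cv2
import numpy as np

# Hypothetical single-band image; black fill assumed to be exactly 0
img = cv2.imread("landsat_band.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)

gauss_kernel = np.array([
    [1/16, 1/8, 1/16],
    [1/8,  1/4, 1/8],
    [1/16, 1/8, 1/16]], dtype=np.float32)

mask = (img != 0).astype(np.float32)

# Same border mode for both, so the padding is treated like invalid pixels
filtered = cv2.filter2D(img, -1, gauss_kernel, borderType=cv2.BORDER_CONSTANT)
weights = cv2.filter2D(mask, -1, gauss_kernel, borderType=cv2.BORDER_CONSTANT)

out = np.zeros_like(img)
valid = (mask > 0) & (weights > 0)  # avoid dividing by zero
out[valid] = filtered[valid] / weights[valid]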
answered Dec 06 '25 by alkasm


