I'm trying to reproduce the behaviour of the following MATLAB code in Python:
% Matlab code
wavelength = 10
orientation = 45
image = imread('filename.tif') % grayscale image
[mag,phase] = imgaborfilt(image, wavelength, orientation)
gabor_im = mag .* sin(phase)
Unfortunately, I don't have a license and cannot run the code. Also, the official MATLAB documentation of imgaborfilt does not specify precisely what the function does.
For lack of an obvious alternative, I'm trying to use OpenCV in Python (open to other suggestions). I have no experience working with OpenCV. I'm trying to use cv2.getGaborKernel and cv2.filter2D. I could not find detailed documentation of the behaviour of these functions either. As far as I know, there is no official documentation of the Python wrapper for OpenCV. The docstrings of the functions provide some information, but it is incomplete and imprecise.
I found this question, where OpenCV is used in C++ to solve the problem. I assume the functions work in a very similar way (also note the official C++ documentation). However, they take a number of additional parameters. How can I find out what the MATLAB function really does, so I can reproduce its behaviour?
# python 3.6
import numpy as np
import cv2
wavelength = 10
orientation = 45
shape = (500, 400) # arbitrary values to get running example code...
sigma = 100 # what to put for Matlab behaviour?
gamma = 1 # what to put for Matlab behaviour?
gabor_filter = cv2.getGaborKernel(shape, sigma, orientation, wavelength, gamma)
print(gabor_filter.shape) # =(401, 501). Why flipped?
image = np.random.random(shape) # create some random data.
out_buffer = np.zeros(shape)
destination_depth = -1 # like dtype for filter2D. Apparently, -1 = "same as input".
thing = cv2.filter2D(image, destination_depth, gabor_filter, out_buffer)
print(out_buffer.shape, out_buffer.dtype, out_buffer.max()) # =(500, 400) float64 65.2..
print(thing.shape, thing.dtype, thing.max()) # =(500, 400) float64 65.2..
EDIT:
After receiving the great answer by Cris Luengo, I used it to write two functions, using OpenCV and scikit-image respectively, that (hopefully) reproduce the behaviour of MATLAB's imgaborfilt function. I include them here. Note that the scikit-image implementation is a lot slower than the OpenCV one.
I still have further questions about these functions.
import numpy as np
import math
import cv2
def gaborfilt_OpenCV_likeMATLAB(image, wavelength, orientation, SpatialFrequencyBandwidth=1, SpatialAspectRatio=0.5):
"""Reproduces (to what accuracy in what MATLAB version??? todo TEST THIS!) the behaviour of MATLAB imgaborfilt function using OpenCV."""
orientation = -orientation / 180 * math.pi # for OpenCV need radian, and runs in opposite direction
sigma = 0.5 * wavelength * SpatialFrequencyBandwidth
gamma = SpatialAspectRatio
shape = 1 + 2 * math.ceil(4 * sigma) # smaller cutoff is possible for speed
shape = (shape, shape)
gabor_filter_real = cv2.getGaborKernel(shape, sigma, orientation, wavelength, gamma, psi=0)
gabor_filter_imag = cv2.getGaborKernel(shape, sigma, orientation, wavelength, gamma, psi=math.pi / 2)
filtered_image = cv2.filter2D(image, -1, gabor_filter_real) + 1j * cv2.filter2D(image, -1, gabor_filter_imag)
mag = np.abs(filtered_image)
phase = np.angle(filtered_image)
return mag, phase
import numpy as np
import math
from skimage.filters import gabor
def gaborfilt_skimage_likeMATLAB(image, wavelength, orientation, SpatialFrequencyBandwidth=1, SpatialAspectRatio=0.5):
"""TODO (does not quite) reproduce the behaviour of MATLAB imgaborfilt function using skimage."""
sigma = 0.5 * wavelength * SpatialFrequencyBandwidth
filtered_image_re, filtered_image_im = gabor(
image, frequency=1 / wavelength, theta=-orientation / 180 * math.pi,
sigma_x=sigma, sigma_y=sigma/SpatialAspectRatio, n_stds=5,
)
full_image = filtered_image_re + 1j * filtered_image_im
mag = np.abs(full_image)
phase = np.angle(full_image)
return mag, phase
Code to test the above functions:
from matplotlib import pyplot as plt
import numpy as np
def show(im, title=""):
plt.figure()
plt.imshow(im)
plt.title(f"{title}: dtype={im.dtype}, shape={im.shape},\n max={im.max():.3e}, min= {im.min():.3e}")
plt.colorbar()
image = np.zeros((400, 400))
image[200, 200] = 1 # a delta impulse image to visualize the filtering kernel
wavelength = 10
orientation = 33 # in degrees (for MATLAB)
mag_cv, phase_cv = gaborfilt_OpenCV_likeMATLAB(image, wavelength, orientation)
show(mag_cv, "mag") # normalized by maximum, non-zero noise even outside filter window region
show(phase_cv, "phase") # all over the place
mag_sk, phase_sk = gaborfilt_skimage_likeMATLAB(image, wavelength, orientation)
show(mag_sk, "mag skimage") # small values, zero outside filter region
show(phase_sk, "phase skimage") # and hence non-zero only inside filter window region
show(mag_cv - mag_sk/mag_sk.max(), "cv - normalized(sk)") # approximately zero-image.
show(phase_sk - phase_cv, "phase_sk - phase_cv") # phases do not agree at all! Not even in the window region!
plt.show()
The documentation for both MATLAB's imgaborfilt and OpenCV's getGaborKernel is almost complete enough to allow a 1:1 translation. Only a little bit of experimentation is needed to figure out how to translate MATLAB's "SpatialFrequencyBandwidth" to a sigma for the Gaussian envelope.
One thing that I've noticed here is that OpenCV's implementation of Gabor filtering seems to indicate that Gabor filters are not well understood: a quick Google search shows that the authors of the most popular tutorials on Gabor filtering in OpenCV do not properly understand Gabor filters either.
The Gabor filter, as can be learned for example from the same Wikipedia page that OpenCV's documentation links to, is a complex-valued filter. The result of applying it to an image is therefore also complex-valued. MATLAB correctly returns the magnitude and phase of the complex result, rather than the complex-valued image itself, since it is mostly the magnitude that is of interest. The magnitude of the Gabor filter indicates which parts of an image have frequencies of the given wavelength and orientation.
For example, applying a Gabor filter to an example image produces a magnitude image (the magnitude of the complex-valued output) in which structures with the given wavelength and orientation light up.
However, OpenCV's filtering seems to be strictly real-valued. It is possible to build a real-valued component of the Gabor filter kernel with an arbitrary phase. Gabor's filter has a real component with 0 phase and an imaginary component with π/2 phase (that is, the real component is even and the imaginary component is odd). Combining the even and the odd filters allows one to analyze a signal with arbitrary phase; creating filters with other phases is unnecessary.
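To make the even/odd decomposition concrete, here is a minimal NumPy sketch of the complex-valued Gabor kernel, assuming the same parameterization as cv2.getGaborKernel (a Gaussian envelope times a complex exponential carrier); the function is illustrative, not an OpenCV API:
import numpy as np
def complex_gabor_kernel(ksize, sigma, theta, lambd, gamma):
    # ksize is an odd kernel size; theta in radians; lambd is the wavelength
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_rot**2 + (gamma * y_rot)**2) / (2 * sigma**2))
    carrier = np.exp(1j * 2 * np.pi * x_rot / lambd)
    # real part is the even (cosine, psi=0) component, imaginary part the odd (sine) one
    return envelope * carrier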
To replicate the following MATLAB code:
image = zeros(64,64);
image(33,33) = 1; % a delta impulse image to visualize the filtering kernel
wavelength = 10;
orientation = 30; % in degrees
[mag,phase] = imgaborfilt(image, wavelength, orientation);
% defaults: 'SpatialFrequencyBandwidth'=1; 'SpatialAspectRatio'=0.5
in Python with OpenCV one would need to do:
import cv2
import numpy as np
import math
image = np.zeros((64, 64))
image[32, 32] = 1 # a delta impulse image to visualize the filtering kernel
wavelength = 10
orientation = -30 / 180 * math.pi # in radians; the angle seems to run in the opposite direction
sigma = 0.5 * wavelength * 1 # 1 == SpatialFrequencyBandwidth
gamma = 0.5 # SpatialAspectRatio
shape = 1 + 2 * math.ceil(4 * sigma) # smaller cutoff is possible for speed
shape = (shape, shape)
gabor_filter_real = cv2.getGaborKernel(shape, sigma, orientation, wavelength, gamma, psi=0)
gabor_filter_imag = cv2.getGaborKernel(shape, sigma, orientation, wavelength, gamma, psi=math.pi/2)
gabor = cv2.filter2D(image, -1, gabor_filter_real) + 1j * cv2.filter2D(image, -1, gabor_filter_imag)
mag = np.abs(gabor)
phase = np.angle(gabor)
Do note that it is important for the input image to be of a floating-point type; otherwise the computation result will be cast to a type that cannot represent all the values produced by the Gabor filter.
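For example, when reading the grayscale TIFF from the question one might convert explicitly (a small sketch; 'filename.tif' is the placeholder from the OP):
image = cv2.imread('filename.tif', cv2.IMREAD_GRAYSCALE)
image = image.astype(np.float64)  # keep the filter output from being cast to uint8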
The last line of the code in the OP is
gabor_im = mag .* sin(phase)
This is, to me, very strange and I wonder what this code was used for. What it accomplishes is obtaining the result of the imaginary component of the Gabor filter:
gabor_im = cv2.filter2D(image, -1, gabor_filter_imag)
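This identity can be sanity-checked numerically, reusing the variables defined in the code above (a minimal sketch):
# Im(z) == |z| * sin(angle(z)), so mag .* sin(phase) recovers the imaginary part
gabor_im_from_polar = mag * np.sin(phase)
gabor_im_direct = cv2.filter2D(image, -1, gabor_filter_imag)
print(np.allclose(gabor_im_from_polar, gabor_im_direct))  # expected: True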