I would like to convolve two images in Matlab using the 2D FFT, without resorting to the conv2 function. However, I am not sure how the matrices should be padded and prepared for the convolution.
The mathematical operation is the following:
A * B = C
In the above, * is the convolution operator.
The following Matlab program shows the difference between padding and not padding the matrices. I suspect that not padding the matrices results in a circular convolution, but I would like to perform a linear convolution without aliasing.
If I do pad the two matrices, then how do I truncate the output of the convolution so that C is the same size as A and B?
A = rgb2gray(im2double(imread('1.png'))); % input A
B = rgb2gray(im2double(imread('2.png'))); % kernel B
figure;
imagesc(A); colormap gray;
title ('A')
figure;
imagesc(B); colormap gray;
title ('B')
[m,n] = size(A);
mm = 2*m - 1;
nn = 2*n - 1;
C = (ifft2(fft2(A,mm,nn).* fft2(B,mm,nn)));
figure;
imagesc(C); colormap gray;
title ('C with padding')
C0 = (ifft2(fft2(A).* fft2(B)));
figure;
imagesc(C0); colormap gray;
title ('C without padding')
The program produces figures showing A, B, C with padding, and C without padding (not reproduced here).
Without padding, the result is equivalent to circular convolution, as you point out. For linear convolution of two images (2-D signals), A*B, the full output has size (Ma+Mb-1) x (Na+Nb-1), where Ma x Na and Mb x Nb are the sizes of A and B, respectively. For example, the 256x256 image and the 15x15 kernel used below give a 270x270 full output. After padding both arrays to that size, multiplying their transforms, and transforming back via ifft2, you can keep the central part of the resulting image (usually a region the size of the larger of A and B).
A = double(imread('cameraman.tif'))./255; % image
B = fspecial('gaussian', [15 15], 2); % some 2D filter function
[m,n] = size(A);
[mb,nb] = size(B);
% output size
mm = m + mb - 1;
nn = n + nb - 1;
% pad, multiply and transform back (real() drops the tiny imaginary part left by rounding)
C = real(ifft2(fft2(A,mm,nn).* fft2(B,mm,nn)));
% crop offsets (to get an output of size == size(A), matching conv2 'same')
padC_m = ceil((mb-1)./2);
padC_n = ceil((nb-1)./2);
% frequency-domain convolution result, cropped to the central part
D = C(padC_m+1:m+padC_m, padC_n+1:n+padC_n);
figure; imshow(D,[]);
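As a side note, the uncropped C above should itself match the full linear convolution. A minimal sanity check, assuming the variables A, B and C from the snippet above are still in the workspace:
% the uncropped result equals conv2's default 'full' output, up to rounding
Cfull = conv2(A,B);              % size (m+mb-1) x (n+nb-1), same as C
max(abs(C(:) - Cfull(:)))        % should be on the order of machine rounding error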
Now, compare the above with spatial-domain convolution using conv2:
% space-domain convolution result
F = conv2(A,B,'same');
figure; imshow(F,[]);
The results are visually the same, and the total error between the two (due to rounding) is on the order of 1e-10.
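To quantify that, a quick check using the D and F computed above (a minimal sketch):
% maximum absolute difference between frequency-domain and spatial-domain results
max(abs(D(:) - F(:)))   % expected to be tiny (rounding error), consistent with the claim above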