I'm currently working on a library to generate synthetic fingerprints using the SFinGe method (by Maltoni, Maio and Cappelli): http://biolab.csr.unibo.it/research.asp?organize=Activities&select=&selObj=12&pathSubj=111%7C%7C12&
One of the steps requires me to apply different Gabor filters to an image. Each pixel in the image has an orientation and a frequency associated with it, so the convolution is not done with one kernel over the entire image; the filter must change during the process depending on those attributes, so that each pixel is altered in a different way.
If you apply the filters this way and convolve the image several times (binarizing the image after each convolution), you obtain this:
A master fingerprint. This image took about 20 seconds to generate (which is way too slow; this is why I want to do it with FFT), since I had to perform the convolution 5 times to complete it (you start from a few random black points).
My filters are 30x30, and the image is 275x400. There are a total of 36,000 filters, one for each degree and density (density goes from 0 to 100). I'm planning on reducing the number of filters from 36,000 to 9,000, since I can cover all the angles with those. Also, all the filters are pre-calculated and stored in a filter bank.
This is the C# source code of the Gabor convolution implementation. These two methods perform the convolution:
/// <summary>
/// Convolve the image with the different filters depending on the orientation and density of the pixel.
/// </summary>
/// <param name="image">The image to be filtered.</param>
/// <param name="directionalMap">The directional map.</param>
/// <param name="densityMap">The density map.</param>
/// <returns>The filtered image, with values scaled to [0, 255].</returns>
public double[,] Filter(double[,] image, double[,] directionalMap, double[,] densityMap)
{
    int midX = FILTER_SIZE / 2;
    int midY = FILTER_SIZE / 2;
    double[,] filteredImage = new double[image.GetLength(0), image.GetLength(1)];

    for (int i = 0; i < image.GetLength(0); i++)
        for (int j = 0; j < image.GetLength(1); j++)
        {
            // Select the pre-calculated filter matching this pixel's orientation
            // (converted from radians to degrees) and density, then convolve
            // the window centered on this pixel.
            double pixelValue = GetPixelConvolutionValue(
                image,
                this.filterBank[(int)Math.Floor(directionalMap[i, j] * 180 / Math.PI)][Math.Round(densityMap[i, j], 2)],
                i - midX, j - midY);
            filteredImage[i, j] = pixelValue;
        }

    return this.RescaleValues(filteredImage, 0.0, 255.0);
}
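For context, this is roughly how the method above is driven during generation (a minimal sketch of the process described at the top; CreateImageWithRandomBlackPoints and Binarize are hypothetical helpers standing in for steps whose code is not shown here):
// Hypothetical driver for the generation loop described above: start from a
// few random black points, then alternate convolution and binarization 5 times.
// CreateImageWithRandomBlackPoints and Binarize are assumed helpers.
double[,] image = CreateImageWithRandomBlackPoints(275, 400);
for (int pass = 0; pass < 5; pass++)
{
    image = Filter(image, directionalMap, densityMap); // values scaled to [0, 255]
    image = Binarize(image, 128.0);                    // assumed mid-range threshold
}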
/// <summary>
/// Gets the pixel convolution value.
/// </summary>
/// <param name="image">The image.</param>
/// <param name="filter">The filter.</param>
/// <param name="sourceX">The source X.</param>
/// <param name="sourceY">The source Y.</param>
/// <returns>The normalized response of the filter at this pixel.</returns>
private double GetPixelConvolutionValue(double[,] image, double[,] filter, int sourceX, int sourceY)
{
    double result = 0.0;
    int totalPixels = 0;

    for (int i = 0; i < filter.GetLength(0); i++)
    {
        if (i + sourceX < 0 || i + sourceX >= image.GetLength(0))
            continue; // skip filter rows that fall outside the image
        for (int j = 0; j < filter.GetLength(1); j++)
        {
            if (j + sourceY < 0 || j + sourceY >= image.GetLength(1))
                continue; // skip filter columns that fall outside the image
            result += image[sourceX + i, sourceY + j] * filter[i, j];
            ++totalPixels;
        }
    }

    // Normalize by the number of pixels actually covered, so border pixels
    // (where part of the window falls outside the image) are not darkened.
    return result / totalPixels;
}
These two methods generate the different Gabor filters for the filter bank:
/// <summary>
/// Creates the gabor filter.
/// </summary>
/// <param name="size">The size.</param>
/// <param name="angle">The angle.</param>
/// <param name="wavelength">The wavelength.</param>
/// <param name="sigma">The sigma.</param>
/// <returns>A size x size Gabor filter kernel.</returns>
public double[,] CreateGaborFilter(int size, double angle, double wavelength, double sigma)
{
    double[,] filter = new double[size, size];
    double frequency = 7 + (100 - (wavelength * 100)) * 0.03;
    int windowSize = size / 2; // center the kernel on (0, 0)

    for (int y = 0; y < size; ++y)
    {
        for (int x = 0; x < size; ++x)
        {
            int dy = -windowSize + y;
            int dx = -windowSize + x;
            filter[x, y] = GaborFilterValue(dy, dx, frequency, angle, 0, sigma, 0.80);
        }
    }

    return filter;
}
/// <summary>
/// Gabor filter values generation.
/// </summary>
/// <param name="x">The x.</param>
/// <param name="y">The y.</param>
/// <param name="lambda">The wavelength.</param>
/// <param name="theta">The orientation.</param>
/// <param name="phi">The phaseoffset.</param>
/// <param name="sigma">The gaussvar.</param>
/// <param name="gamma">The aspectratio.</param>
/// <returns>The Gabor function value at (x, y).</returns>
double GaborFilterValue(int x, int y, double lambda, double theta, double phi, double sigma, double gamma)
{
    // Rotate the coordinates by theta, then evaluate the Gaussian envelope
    // multiplied by the cosine carrier.
    double xx = x * Math.Cos(theta) + y * Math.Sin(theta);
    double yy = -x * Math.Sin(theta) + y * Math.Cos(theta);
    double envelopeVal = Math.Exp(-((xx * xx + gamma * gamma * yy * yy) / (2.0 * sigma * sigma)));
    double carrierVal = Math.Cos(2.0 * Math.PI * xx / lambda + phi);
    return envelopeVal * carrierVal;
}
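For completeness, this is roughly how the filter bank is populated from these two methods (a sketch only; the dictionary layout is inferred from how Filter indexes the bank, and the sigma value 4.0 is an arbitrary placeholder, not my actual parameter):
// Sketch of pre-calculating the filter bank. The layout (angle in degrees ->
// density rounded to two decimals -> kernel) is inferred from how Filter()
// indexes this.filterBank; sigma = 4.0 is a placeholder, not the real value.
private void BuildFilterBank()
{
    this.filterBank = new Dictionary<int, Dictionary<double, double[,]>>();
    for (int degrees = 0; degrees < 180; degrees++)
    {
        var filtersByDensity = new Dictionary<double, double[,]>();
        for (int d = 0; d <= 100; d++)
        {
            double density = Math.Round(d / 100.0, 2); // same rounding Filter() applies
            filtersByDensity[density] =
                CreateGaborFilter(FILTER_SIZE, degrees * Math.PI / 180.0, density, 4.0);
        }
        this.filterBank[degrees] = filtersByDensity;
    }
}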
My goal is to reduce this time to under 1 second (there are several programs that do exactly the same thing in that time). Since the direct convolution approach is not working for me, I decided to implement FFT convolution. The problem with this is that FFT applies the same kernel to the entire image at once (by the convolution theorem, convolution becomes a single point-wise multiplication in the frequency domain), and I need to change the kernel per pixel, because each pixel must be altered depending on its attributes (density and orientation). In the post How to apply Gabor wavelets to an image?, reve-etrange explains how to apply different Gabor filters to an image, but the way he does it is by applying the different filters to the whole image and then summing the responses, and what I need is the responses of different pixels to different filters.
This is what happens when I convolve one filter with the image (using FFT):
This was the filter used:
And this was the image it was convolved with:
This is the C# code of the FFT convolution implementation:
/// <summary>
/// Convolve the image using FFT.
/// </summary>
/// <param name="image">The image to be filtered.</param>
/// <param name="directionalMap">The directional map.</param>
/// <param name="densityMap">The density map.</param>
/// <param name="FFT">if set to <c>true</c> [FFT].</param>
/// <returns>The filtered image, cropped back to the original size.</returns>
public double[,] Filter(double[,] image, double[,] directionalMap, double[,] densityMap, bool FFT)
{
    double[,] filter = this.filterBank[70][0];
    double[,] paddedFilter = PadImage(filter, 512, 512, 0, 0); // pad the filter to power-of-two dimensions
    double[,] paddedImage = PadImage(image, 512, 512, 0, 0);   // pad the image to power-of-two dimensions

    FFT fftOne = new FFT(paddedImage);
    FFT fftTwo = new FFT(paddedFilter);
    fftOne.ForwardFFT();
    fftTwo.ForwardFFT();

    // Point-wise multiplication in the frequency domain, then back to the space domain.
    FFT result = fftOne * fftTwo;
    result.InverseFFT();

    double[,] filteredImage = result.GreyImage;
    double[,] filteredImageWithValuesScaled = this.RescaleValues(filteredImage, 0.0, 255.0);
    return CropImage(filteredImageWithValuesScaled, image.GetLength(0), image.GetLength(1));
}
So what I'm asking is: how do you get the responses of different pixels to different kernels with FFT? If this is not possible, is there a way to improve my direct convolution to make it at least 20 times faster?
Also, would it be possible to combine all the filters into one kernel, so I can apply that to the whole image?
I found a way to convolve the image with different Gabor filters and gather the responses of the pixels based on their local characteristics using FFT.
This is called contextual filtering. Usually when you filter an image you apply a single kernel to the entire thing, but in contextual filtering the characteristics of the filter change according to the local context; in this case, the density and orientation of the pixel.
In direct convolution the process is pretty straightforward: you simply change the kernel at each step of the convolution. But in FFT convolution, since the kernel is applied to the image in the frequency domain, you can't change the filter's properties during the process. So the way you do it is by convolving the image with each filter separately. This gives you N filtered images, where N is the number of filters in your filter bank. Then you construct your final image by taking information from the different filtered images, based on the context of the pixel you are recreating.
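In code, the idea looks roughly like this (a sketch only; it reuses the FFT, PadImage and CropImage helpers from the question, and assumes a single density, as in my example below, so the bank is effectively indexed by angle only):
// Sketch of contextual filtering via FFT: one convolution per filter in the
// bank, then a per-pixel selection. Assumes a single density (key 0), as in
// the example below, and reuses the FFT/PadImage/CropImage helpers shown above.
public double[,] ContextualFilterFFT(double[,] image, double[,] directionalMap)
{
    var responses = new Dictionary<int, double[,]>(); // one filtered image per angle
    foreach (var entry in this.filterBank)
    {
        FFT fftImage = new FFT(PadImage(image, 512, 512, 0, 0));
        FFT fftFilter = new FFT(PadImage(entry.Value[0], 512, 512, 0, 0));
        fftImage.ForwardFFT();
        fftFilter.ForwardFFT();
        FFT result = fftImage * fftFilter;
        result.InverseFFT();
        responses[entry.Key] = CropImage(result.GreyImage, image.GetLength(0), image.GetLength(1));
    }

    // Rebuild the final image: each pixel takes its value from the response of
    // the filter whose orientation matches that pixel.
    double[,] finalImage = new double[image.GetLength(0), image.GetLength(1)];
    for (int i = 0; i < image.GetLength(0); i++)
        for (int j = 0; j < image.GetLength(1); j++)
        {
            int angle = (int)Math.Floor(directionalMap[i, j] * 180 / Math.PI);
            finalImage[i, j] = responses[angle][i, j];
        }
    return finalImage;
}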
So for each pixel, you look at its orientation and density properties, and then grab the value at that pixel's position from the filtered image that was generated by convolving the original image with the filter that has those same properties. Here is an example of the process:
This is a graphic representation of the directional map.
I used the same density for all pixels to reduce the number of filters generated.
This is the source image:
This is an example of three of the filters used (0 degrees, 45 degrees, 90 degrees):
These are three examples of the source image convolved with the filters at different angles:
And finally, this is the resulting image. The image was created by taking the value of each pixel from the filtered image matching that pixel's direction and density.
This process is A LOT slower than direct convolution =(, since you have to convolve the original image with all the filters first. The final image took about a minute to generate. So it seems I'm stuck with direct convolution for now =/.
Thanks for reading.