 

Fastest 2D convolution or image filter in Python


Several users have asked about the speed or memory consumption of image convolutions in numpy or scipy [1, 2, 3, 4]. From the responses and my experience using Numpy, I believe this may be a major shortcoming of numpy compared to Matlab or IDL.

None of the answers so far have addressed the overall question, so here it is: "What is the fastest method for computing a 2D convolution in Python?" Common python modules are fair game: numpy, scipy, and PIL (others?). For the sake of a challenging comparison, I'd like to propose the following rules:

  1. Input matrices are 2048x2048 and 32x32, respectively.
  2. Single or double precision floating point are both acceptable.
  3. Time spent converting your input matrix to the appropriate format doesn't count -- just the convolution step.
  4. Replacing the input matrix with your output is acceptable (does any python library support that?)
  5. Direct DLL calls to common C libraries are alright -- lapack or scalapack
  6. PyCUDA is right out. It's not fair to use your custom GPU hardware.
Carl F. asked Apr 19 '11 02:04


People also ask

What is also known as 2D convolution in Python?

2D Convolutions are instrumental when creating convolutional neural networks or just for general image processing filters such as blurring, sharpening, edge detection, and many more. They are based on the idea of using a kernel and iterating through an input image to create an output image.

What is 2D convolution in image processing?

The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel.
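The sliding-kernel description above can be written down directly in NumPy. This is a naive reference implementation meant only to illustrate the mechanics (the function name and 'valid'-mode choice are mine, not from the page), and it is far too slow for the 2048x2048 benchmark in the question:

```python
import numpy as np

def conv2d_naive(image, kernel):
    """Direct 2D convolution, 'valid' mode: slide the flipped kernel over
    the image and sum an elementwise product at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    k = kernel[::-1, ::-1]  # true convolution flips the kernel (vs. correlation)
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out
```

Each output pixel costs O(kh * kw) multiply-adds, which is exactly why the question is about finding something faster.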

Why convolution is used in images?

Convolution is a simple mathematical operation which is fundamental to many common image processing operators. Convolution provides a way of `multiplying together' two arrays of numbers, generally of different sizes, but of the same dimensionality, to produce a third array of numbers of the same dimensionality.


1 Answer

It really depends on what you want to do. Much of the time, you don't need a fully generic (read: slower) 2D convolution. If the filter is separable, you can use two 1D convolutions instead. This is why the specialized filters such as scipy.ndimage.gaussian_filter and scipy.ndimage.uniform_filter are much faster than the same thing implemented as a generic n-D convolution.

At any rate, as a point of comparison:

import timeit

t = timeit.timeit(
    stmt='ndimage.convolve(x, y, output=x)',
    number=1,
    setup="""
import numpy as np
from scipy import ndimage
x = np.random.random((2048, 2048)).astype(np.float32)
y = np.random.random((32, 32)).astype(np.float32)
""")
print(t)

This takes 6.9 seconds on my machine.

Compare this with fftconvolve:

import timeit

t = timeit.timeit(
    stmt="signal.fftconvolve(x, y, mode='same')",
    number=1,
    setup="""
import numpy as np
from scipy import signal
x = np.random.random((2048, 2048)).astype(np.float32)
y = np.random.random((32, 32)).astype(np.float32)
""")
print(t)

This takes about 10.8 seconds. However, with different input sizes, using FFTs to do a convolution can be considerably faster (though I can't seem to come up with a good example at the moment).
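One way to look for the crossover the answer alludes to is to hold the image size fixed and grow the kernel. This is a rough benchmarking sketch; the sizes and the helper function are my own choices, and the actual timings will vary by machine:

```python
import timeit

def time_conv(stmt, img_n, ker_n):
    """Time one convolution of an img_n x img_n image with a
    ker_n x ker_n kernel, excluding setup, as the question requires."""
    setup = f"""
import numpy as np
from scipy import ndimage, signal
x = np.random.random(({img_n}, {img_n})).astype(np.float32)
y = np.random.random(({ker_n}, {ker_n})).astype(np.float32)
"""
    return timeit.timeit(stmt=stmt, number=1, setup=setup)

# Direct convolution cost scales with the kernel area; FFT-based
# convolution is nearly flat in kernel size, so larger kernels
# should increasingly favor fftconvolve.
for ker_n in (8, 16, 32):
    t_direct = time_conv('ndimage.convolve(x, y)', 512, ker_n)
    t_fft = time_conv("signal.fftconvolve(x, y, mode='same')", 512, ker_n)
    print(ker_n, t_direct, t_fft)
```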

Joe Kington answered Oct 01 '22 05:10