Several users have asked about the speed or memory consumption of image convolutions in NumPy or SciPy [1, 2, 3, 4]. From the responses and from my experience using NumPy, I believe this may be a major shortcoming of NumPy compared to MATLAB or IDL.
None of the answers so far have addressed the overall question, so here it is: "What is the fastest method for computing a 2D convolution in Python?" Common Python modules are fair game: NumPy, SciPy, and PIL (others?). For the sake of a challenging comparison, consider a large input and a non-trivial kernel, such as the 2048x2048 float32 image and 32x32 kernel used in the benchmarks below.
2D convolutions are instrumental in convolutional neural networks and in general image-processing filters such as blurring, sharpening, and edge detection. Convolution provides a way of 'multiplying together' two arrays of numbers, generally of different sizes but of the same dimensionality, to produce a third array of the same dimensionality.

The operation is fairly simple at heart: you start with a kernel, which is just a small matrix of weights. This kernel 'slides' over the 2D input, performing an elementwise multiplication with the part of the input it currently covers, and then summing the results into a single output pixel.
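To make the sliding-kernel description concrete, here is a minimal pure-NumPy sketch of a direct 'valid' convolution (the helper name convolve2d_naive is our own, for illustration; a Python loop like this is far too slow for real images, which is precisely what the benchmarks below address):

    import numpy as np
    from scipy import signal

    def convolve2d_naive(image, kernel):
        """Direct 'valid' 2D convolution: slide the (flipped) kernel over
        the image, multiply elementwise, and sum into one output pixel."""
        kh, kw = kernel.shape
        out_h = image.shape[0] - kh + 1
        out_w = image.shape[1] - kw + 1
        k = kernel[::-1, ::-1]  # convolution (not correlation) flips the kernel
        out = np.empty((out_h, out_w), dtype=image.dtype)
        for i in range(out_h):
            for j in range(out_w):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        return out

    # Sanity check against SciPy's reference implementation on a small input:
    img = np.random.random((64, 64)).astype(np.float32)
    ker = np.random.random((5, 5)).astype(np.float32)
    assert np.allclose(convolve2d_naive(img, ker),
                       signal.convolve2d(img, ker, mode='valid'), atol=1e-4)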
It really depends on what you want to do... A lot of the time, you don't need a fully generic (read: slower) 2D convolution. If the filter is separable, you can use two 1D convolutions instead. (This is why the various scipy.ndimage.gaussian_filter and scipy.ndimage.uniform_filter functions are much faster than the same thing implemented as a generic n-D convolution.) A sketch of this follows.
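To make the separability point concrete, here is a minimal sketch (the 31-tap box kernel is an illustrative choice, not one of the benchmarks below): a uniform kernel factors into the outer product of two 1D averaging kernels, so two cheap 1D passes reproduce the generic 2D result.

    import numpy as np
    from scipy import ndimage

    x = np.random.random((2048, 2048)).astype(np.float32)

    # A box (uniform) kernel is separable: the 2D kernel is the outer
    # product of two 1D averaging kernels. An odd length (31) keeps the
    # filter centering unambiguous.
    k1d = np.full(31, 1.0 / 31, dtype=np.float32)
    k2d = np.outer(k1d, k1d)

    # Generic 2D convolution: ~31*31 = 961 multiply-adds per output pixel.
    slow = ndimage.convolve(x, k2d)

    # Two 1D passes: ~31 + 31 = 62 multiply-adds per output pixel.
    fast = ndimage.convolve1d(ndimage.convolve1d(x, k1d, axis=0), k1d, axis=1)

    # Both routes use ndimage's default 'reflect' boundaries, so the
    # results agree up to float32 round-off.
    print(np.allclose(slow, fast, atol=1e-3))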
At any rate, as a point of comparison:
    import timeit

    t = timeit.timeit(
        stmt='ndimage.convolve(x, y, output=x)',
        number=1,
        setup="""
    import numpy as np
    from scipy import ndimage
    x = np.random.random((2048, 2048)).astype(np.float32)
    y = np.random.random((32, 32)).astype(np.float32)
    """)
    print(t)
This takes 6.9 sec on my machine...
Compare this with fftconvolve:
    import timeit

    t = timeit.timeit(
        stmt="signal.fftconvolve(x, y, mode='same')",
        number=1,
        setup="""
    import numpy as np
    from scipy import signal
    x = np.random.random((2048, 2048)).astype(np.float32)
    y = np.random.random((32, 32)).astype(np.float32)
    """)
    print(t)
This takes about 10.8 seconds. However, with different input sizes, using FFTs to do a convolution can be considerably faster (though I can't seem to come up with a good example at the moment...).
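As a rough sketch of how one might look for that crossover (the sizes here are illustrative, not from the original benchmarks; the break-even point depends on the machine and the SciPy build): direct convolution cost grows with the kernel area, while the FFT cost is roughly independent of it, so larger kernels increasingly favor fftconvolve.

    import timeit

    # A 512x512 image keeps the direct-convolution timings tolerable; the
    # kernel sizes are illustrative. Boundary handling differs between the
    # two routes (ndimage reflects, fftconvolve zero-pads), so this
    # compares speed only, not pixel-for-pixel output.
    setup = """
    import numpy as np
    from scipy import ndimage, signal
    x = np.random.random((512, 512)).astype(np.float32)
    y = np.random.random((%d, %d)).astype(np.float32)
    """

    for k in (8, 32, 64):
        direct = timeit.timeit('ndimage.convolve(x, y)',
                               setup=setup % (k, k), number=1)
        fft = timeit.timeit("signal.fftconvolve(x, y, mode='same')",
                            setup=setup % (k, k), number=1)
        print(k, direct, fft)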