CUDA small kernel 2d convolution - how to do it

I've been experimenting with CUDA kernels for days to perform a fast 2D convolution between a 500x500 image (though the dimensions may vary) and a very small 2D kernel (a 3x3 Laplacian kernel, too small to take full advantage of all the CUDA threads).

I created a classic CPU implementation (two for loops, as straightforward as you would expect) and then I started creating CUDA kernels.

After a few disappointing attempts to perform a faster convolution I ended up with this code: http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section). It basically has a 16x16 thread block load all the convolution data it needs into shared memory and then perform the convolution.

No luck: the CPU is still a lot faster. I didn't try the FFT approach because the CUDA SDK states that it is only efficient for large kernel sizes.

Whether or not you read everything I wrote, my question is:

how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?

asked Apr 13 '12 by paulAl

1 Answer

You are right that a 3x3 kernel is not suitable for an FFT-based approach. The best way to deal with this would be to push the kernel into constant memory (or, if you are using a Fermi+ card, this should not matter too much thanks to the hardware caches).

Since you know the kernel size, the fastest way to do this would be to read chunks of the input image/signal into shared memory and perform an unrolled multiply-and-add operation.
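
A minimal sketch of what that could look like for a 3x3 filter is below: the coefficients sit in __constant__ memory, each 16x16 block stages its tile plus a one-pixel halo in shared memory, and the 3x3 multiply-add is fully unrolled. Names such as TILE, d_kernel and conv3x3 are illustrative, not from the original answer, and border handling is simply clamped.

#include <cuda_runtime.h>

#define TILE 16

// 3x3 filter coefficients, broadcast to every thread through the constant cache.
__constant__ float d_kernel[3][3];

__global__ void conv3x3(const float* in, float* out, int w, int h)
{
    // One 16x16 output tile plus a one-pixel halo on each side.
    __shared__ float tile[TILE + 2][TILE + 2];

    int x = blockIdx.x * TILE + threadIdx.x;   // output pixel this thread writes
    int y = blockIdx.y * TILE + threadIdx.y;

    // Cooperatively load the (TILE+2)x(TILE+2) tile, clamping at the image borders.
    for (int dy = threadIdx.y; dy < TILE + 2; dy += TILE)
        for (int dx = threadIdx.x; dx < TILE + 2; dx += TILE) {
            int gx = min(max(blockIdx.x * TILE + dx - 1, 0), w - 1);
            int gy = min(max(blockIdx.y * TILE + dy - 1, 0), h - 1);
            tile[dy][dx] = in[gy * w + gx];
        }
    __syncthreads();

    if (x < w && y < h) {
        // Unrolled 3x3 multiply-add (correlation form; a symmetric Laplacian
        // makes flipping the kernel irrelevant).
        float sum = 0.0f;
        #pragma unroll
        for (int i = 0; i < 3; ++i)
            #pragma unroll
            for (int j = 0; j < 3; ++j)
                sum += d_kernel[i][j] * tile[threadIdx.y + i][threadIdx.x + j];
        out[y * w + x] = sum;
    }
}

// Host-side helper: copy the filter to constant memory once, then launch.
void convolve3x3(const float* d_in, float* d_out, int w, int h, const float h_kernel[9])
{
    cudaMemcpyToSymbol(d_kernel, h_kernel, 9 * sizeof(float));
    dim3 block(TILE, TILE);
    dim3 grid((w + TILE - 1) / TILE, (h + TILE - 1) / TILE);
    conv3x3<<<grid, block>>>(d_in, d_out, w, h);
}
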

--

If you are willing to use libraries to perform this operation, ArrayFire and OpenCV have highly optimized convolution routines that can save you a lot of development time.

I am not too familiar with OpenCV, but in ArrayFire you can do something like the following.

array kernel = array(3, 3, h_kernel, afHost); // Transfer the kernel to gpu
array image  = array(w, h, h_image , afHost); // Transfer the image  to gpu
array result = convolve2(image, kernel);       // Performs 2D convolution

EDIT

The added benefit of using ArrayFire is that its batched operations allow you to perform convolution on multiple images in parallel. You can read about how convolutions support batch operations here.

For example, if you had 10 images that you wanted to convolve with the same kernel, you could do something like the following:

array kernel = array(3, 3, h_kernel, afHost);     // Transfer the kernel to gpu
array images = array(w, h, 10, h_images, afHost); // Transfer the images to gpu
array res    = convolve2(images, kernel); // Perform all operations simultaneously

--

Full Disclosure: I work at AccelerEyes and actively work on ArrayFire.

answered Oct 12 '22 by Pavan Yalamanchili