 

CUDA parallelizing a nested for loop

I am new to CUDA. I am trying to parallelize the following code. Right now the work sits in a kernel, but the kernel does not use threads at all, so it is slow. I tried to apply this answer, but to no avail so far.

The kernel is supposed to generate the first n prime numbers and put them into the device_primes array, which is later accessed from the host. The code is correct and works fine in its serial version, but I need to speed it up, perhaps by using shared memory.

//CUDA kernel code
__global__ void generatePrimes(int* device_primes, int n)
{
    //int i = blockIdx.x * blockDim.x + threadIdx.x;
    //int j = blockIdx.y * blockDim.y + threadIdx.y;

    int counter = 0;
    int c = 0;

    for (int num = 2; counter < n; num++)
    {
        for (c = 2; c <= num - 1; c++)
        {
            if (num % c == 0) //not prime
            {
                break;
            }
        }
        if (c == num) //prime
        {
            device_primes[counter] = num;
            counter++;
        }
    }
}
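
For reference, a minimal host-side driver for this kernel looks roughly like the following (simplified sketch; n = 10 and the single-thread launch configuration are just placeholders):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const int n = 10;             // placeholder: how many primes to generate
    int *device_primes = nullptr;
    int host_primes[n];

    cudaMalloc(&device_primes, n * sizeof(int));
    generatePrimes<<<1, 1>>>(device_primes, n); // serial kernel: one thread does all the work
    cudaMemcpy(host_primes, device_primes, n * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; i++)
        printf("%d ", host_primes[i]);          // expected: 2 3 5 7 11 13 17 19 23 29
    printf("\n");

    cudaFree(device_primes);
    return 0;
}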

My current, preliminary, and definitely wrong attempt to parallelize this looks like the following:

//CUDA kernel code
__global__ void generatePrimes(int* device_primes, int n) 
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int num = i + 2; 
    int c = j + 2;
    int counter = 0;

    if ((counter >= n) || (c > num - 1))
    {
        return;
    }
    if (num % c == 0) //not prime
    {
    
    }
    if (c == num) //prime
    {
       device_primes[counter] = num;
       counter++;
    }
    num++;
    c++;
}

But this code populates the array with data that does not make sense; in addition, many values are zeroes. Thanks in advance for any help; it's appreciated.

asked Nov 04 '12 by Nikita K

1 Answer

You have some problems in your code, for example:

int num = i + 2;

This expression assigns iteration 2 to thread 0, iteration 3 to thread 1, and so on. The problem is that the next iteration each thread computes is based on the expression num++;. Consequently, thread 0 will compute iteration 3 next, which was already computed by thread 1, leading to redundant computation. Furthermore, for this problem I think it would be easier to use only one dimension instead of two (x, y). With this in mind, you have to change num++ to:

num += blockDim.x * gridDim.x;
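
As a minimal sketch of the resulting grid-stride pattern (the kernel name strideSketch and the placeholder work are illustrative, not part of the original code): with total = blockDim.x * gridDim.x threads, thread 0 visits 2, 2 + total, 2 + 2 * total, and so on, thread 1 visits 3, 3 + total, ..., so no number is tested twice:

__global__ void strideSketch(int *out, int limit)
{
    int num = blockIdx.x * blockDim.x + threadIdx.x + 2; // thread 0 starts at 2, thread 1 at 3, ...
    while (num < limit)
    {
        out[num] = num;                // placeholder work on this thread's number
        num += blockDim.x * gridDim.x; // skip over the numbers owned by the other threads
    }
}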

Another issue is that you did not take into consideration that the variable counter has to be shared among the threads. Otherwise, each thread will try to find n primes on its own, and all of them will populate the entire array. So you have to change int counter = 0; into a shared or global variable. Let us use a global variable so that it is visible to all the threads from all the blocks. We can use position zero of the array device_primes to hold the counter.

You also have to initialize this value. Let us assign this job to only one thread, namely the thread with id = 0, so:

if (thread_id == 0) device_primes[0] = 1;

However, this variable is global and it will be written by all threads. Therefore, we must guarantee that every thread, before writing to that global variable, sees that the counter is 1 (position zero of device_primes holds the counter; the primes themselves start at position one). So you also have to add a barrier after the initialization (keep in mind that __syncthreads() synchronizes only the threads within a block):

if (thread_id == 0)
    device_primes[0] = 1;
__syncthreads();

So a possible solution (albeit an inefficient one) is:

__global__ void getPrimes(int *device_primes, int n)
{
    int c = 0;
    int thread_id = blockIdx.x * blockDim.x + threadIdx.x;
    int num = thread_id;

    if (thread_id == 0) device_primes[0] = 1;
    __syncthreads();

    while (device_primes[0] < n)
    {
        for (c = 2; c <= num - 1; c++)
        {
            if (num % c == 0) //not prime
            {
                break;
            }
        }

        if (c == num) //prime
        {
            int pos = atomicAdd(&device_primes[0], 1);
            device_primes[pos] = num;
        }

        num += blockDim.x * gridDim.x; // next number for this thread
    }
}

The line atomicAdd(&device_primes[0], 1); basically performs device_primes[0]++, but atomically: it increments the counter and returns its old value, so each thread gets a unique position pos at which to write its prime. We use an atomic operation because the counter is global and we need to guarantee mutual exclusion. Note that you may have to compile with the flag -arch sm_20.
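
As an isolated sketch of this slot-reservation idea (the kernel name appendSketch and its body are illustrative, not from the solution above; buf[0] is assumed to be initialized to 1 before launch):

__global__ void appendSketch(int *buf)
{
    int value = blockIdx.x * blockDim.x + threadIdx.x;
    int pos = atomicAdd(&buf[0], 1); // atomically fetch the old counter value and bump it
    buf[pos] = value;                // no other thread received this slot, so no write race
}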

Optimization: code-wise, it would be better to use an approach with less synchronization, or none at all. Moreover, the number of computations could also be reduced by taking into account some of the properties of prime numbers, as showcased by the Sieve of Eratosthenes: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
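
As a rough illustration of the sieve idea (a sketch only, not part of the solution above: the kernel name sieveSketch is hypothetical, is_composite is assumed to hold limit + 1 zero-initialized flags, and the launch is assumed to cover every divisor p with p * p <= limit), each thread takes one candidate divisor and marks its multiples as composite; the host then collects the unmarked numbers:

__global__ void sieveSketch(char *is_composite, int limit)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x + 2; // candidate divisor for this thread
    if ((long long)p * p > limit) return;              // divisors above sqrt(limit) mark nothing new
    for (long long m = (long long)p * p; m <= limit; m += p)
        is_composite[m] = 1;                           // every such multiple of p is composite
}

Marking the multiples of composite values of p as well is redundant but harmless; a full sieve would skip them at the cost of extra coordination between threads.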

answered Oct 13 '22 by dreamcrash