
parallel C code for distance computation

I have C code that computes the distance between two sets of nodes (three coordinates each). Even though my code has been fast enough so far, I want to speed it up a bit more using parallel computing. I've already found some info about OpenMP and I'm trying to use it right now, but there's something a bit weird: without OpenMP the CPU time is 20s, and adding the two pragma lines it takes 160s! How could that happen?

Here is my code:

float computedist(float **vG1, float **vG2, int ncft, int ntri2, int jump, float *dist){
    int k = 0, i, j;
    float min = 0;
    float max = 0;
    float avg = 0;
    float *d = malloc(3*sizeof(float));
    float diff;

    #pragma omp parallel
    for(i=0;i<ncft;i+=jump){
        #pragma omp parallel
        for(j=0;j<ntri2;j++){
            d[0] = vG1[i][0] - vG2[j][0];
            d[1] = vG1[i][1] - vG2[j][1];
            d[2] = vG1[i][2] - vG2[j][2];
            diff = sqrt(pow(d[0],2) + pow(d[1],2) + pow(d[2],2));
            if(j==0)
                dist[k] = diff;
            else
                if(diff<dist[k])
                    dist[k] = diff;

        }
        avg += dist[k];
        if(dist[k]>max)
            max = dist[k];
        k++;
    }

    printf("max distance: %f\n",max);
    printf("average distance: %f\n",avg/(int)(ncft/jump));

    free(d);

    return max;
}

Thank you so much for any help

asked Jan 30 '12 by Nicholas

2 Answers

(The answer below refers to the initial code in the question, which has since been improved by applying these suggestions.)


You need to read more about how to use OpenMP. The specification is available at http://www.openmp.org, along with links to tutorials and other resources.

I will point out some issues in your code and give recommendations on how to fix them.

    float *d = malloc(3*sizeof(float));
    float diff;

d is used as a temporary variable, so it should be marked private in #pragma omp parallel for (see below) to avoid data races. And instead of dynamic allocation, I would just use 3 separate floats. diff also holds a temporary value, so it should be private as well.
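For instance, the temporaries could simply be (the names dx, dy, dz are mine):

    /* Three plain scalars replace the malloc'd array; listed in a
       private() clause (or declared inside the loop body), each
       thread gets its own copy, so there is nothing to race on. */
    float dx, dy, dz, diff;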

    #pragma omp parallel
    for(i=0;i<ncft;i+=jump){
        #pragma omp parallel
        for(j=0;j<ntri2;j++){

You created a parallel region where each thread executes the whole loop (because the region does not contain any work-sharing constructs), and inside it you created a nested region with a new(!) set of threads, each of which also executes the whole inner loop. That adds lots of overhead and unnecessary computation to your program. What you need is #pragma omp parallel for, applied only to the outer loop, as sketched below.
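A sketch of that structure (the inner loop stays serial within each thread; j must be listed as private because it is declared outside the region, while the loop variable i is made private automatically):

    /* One team of threads shares the iterations of the outer loop;
       no nested regions, no redundant re-execution of the loops. */
    #pragma omp parallel for private(j)
    for(i=0;i<ncft;i+=jump){
        for(j=0;j<ntri2;j++){
            /* distance computation as before */
        }
    }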

            d[0] = vG1[i][0] - vG2[j][0];
            d[1] = vG1[i][1] - vG2[j][1];
            d[2] = vG1[i][2] - vG2[j][2];
            diff = sqrt(pow(d[0],2) + pow(d[1],2) + pow(d[2],2));

Not related to parallelism, but why call pow just to compute squares? Good old multiplication would likely be both simpler to read and faster.
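For example:

    /* Plain multiplications instead of pow(); sqrtf is the
       single-precision variant of sqrt, matching the float data. */
    diff = sqrtf(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);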

            if(j==0)
                dist[k] = diff;
            else
                if(diff<dist[k])
                    dist[k] = diff;

Since the action is the same (dist[k] = diff;), the code can be simplified by combining the two conditions with || (logical or).
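For example:

    /* j==0 (first candidate) and diff<dist[k] (better candidate)
       trigger the same store, so one condition suffices. */
    if(j==0 || diff<dist[k])
        dist[k] = diff;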

        }
        avg += dist[k];
        if(dist[k]>max)
            max = dist[k];

Here you compute aggregate values across the outer loop. In OpenMP, this is done with the reduction clause of #pragma omp for.
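A sketch of the clause (note that the max reduction operator for C requires OpenMP 3.1 or later):

    /* Each thread accumulates into private copies of avg and max,
       which are combined when the loop ends. */
    #pragma omp parallel for reduction(+:avg) reduction(max:max)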

        k++;
    }

Currently you increment k at each iteration, creating an unnecessary dependency between iterations that leads to a data race in parallel code. According to your code, k is just a convenient "alias" for i/jump, so assign it as such at the beginning of each iteration and make it private.
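Putting all of the above together, the function could look like this (a sketch, untested; it assumes OpenMP 3.1+ for the max reduction):

    #include <math.h>
    #include <stdio.h>

    float computedist(float **vG1, float **vG2, int ncft, int ntri2,
                      int jump, float *dist){
        int i, j;
        float max = 0;
        float avg = 0;

        /* One work-sharing construct on the outer loop: j is private,
           avg and max are reductions, k becomes a per-iteration alias. */
        #pragma omp parallel for private(j) reduction(+:avg) reduction(max:max)
        for(i=0;i<ncft;i+=jump){
            int k = i/jump;   /* declared in the loop body, hence private */
            for(j=0;j<ntri2;j++){
                float dx = vG1[i][0] - vG2[j][0];
                float dy = vG1[i][1] - vG2[j][1];
                float dz = vG1[i][2] - vG2[j][2];
                float diff = sqrtf(dx*dx + dy*dy + dz*dz);
                if(j==0 || diff<dist[k])
                    dist[k] = diff;
            }
            avg += dist[k];
        }

        printf("max distance: %f\n", max);
        printf("average distance: %f\n", avg/(ncft/jump));

        return max;
    }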

answered Nov 06 '22 by Alexey Kukanov

You add a lot of synchronization when you put #pragma omp parallel on both the outer loop and the inner loop.

When using #pragma omp parallel, there is an implicit barrier at the end of the parallel region, so all threads wait until the last thread finishes. In your case the threads have to wait in both the inner loop and the outer loop, so you get a lot of synchronization overhead.

It is usually best to use #pragma omp parallel only on the outer loop [assuming there is enough work there to be done...] to minimize the number of barriers; see the sketch below.
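Schematically (the comments mark where the implicit barriers fall):

    #pragma omp parallel        /* fork the outer team once */
    for(i=0;i<ncft;i+=jump){
        #pragma omp parallel    /* fork a NEW inner team every iteration */
        for(j=0;j<ntri2;j++){
            /* work */
        }                       /* implicit barrier: join the inner team,
                                   paid once per outer iteration */
    }                           /* implicit barrier: join the outer team */

With #pragma omp parallel for on the outer loop only, there is a single fork and a single join.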

answered Nov 06 '22 by amit