 

Task based programming : #pragma omp task versus #pragma omp parallel for

Tags:

task

openmp

Considering:

    void saxpy_worksharing(float* x, float* y, float a, int N) {
      #pragma omp parallel for
      for (int i = 0; i < N; i++) {
         y[i] = y[i]+a*x[i];
      }
    }

And

    void saxpy_tasks(float* x, float* y, float a, int N) {
      #pragma omp parallel
      {
         for (int i = 0; i < N; i++) {
         #pragma omp task
         {
           y[i] = y[i]+a*x[i];
         }
      }
   }

What is the difference between using tasks and the omp parallel for directive? Why can we write recursive algorithms such as merge sort with tasks, but not with worksharing?

asked Oct 25 '12 by user1511956


1 Answer

I would suggest that you have a look at the OpenMP tutorial from Lawrence Livermore National Laboratory, available here.

Your particular example is one that should not be implemented using OpenMP tasks: each task performs only a very simple computation, so the tasking overhead would be gigantic, as you can see in my answer to this question. Besides, the second code is conceptually wrong (beyond the missing closing brace). Since there is no worksharing directive, all threads execute all iterations of the loop, and instead of N tasks, N times the number of threads tasks would get created. It should be rewritten in one of the following ways:

Single task producer - common pattern, NUMA unfriendly:

void saxpy_tasks(float* x, float* y, float a, int N) {
   #pragma omp parallel
   {
      #pragma omp single
      {
         for (int i = 0; i < N; i++)
            #pragma omp task
            {
               y[i] = y[i]+a*x[i];
            }
      }
   }
}

The single directive would make the loop run inside a single thread only. All other threads would skip it and hit the implicit barrier at the end of the single construct. As barriers contain implicit task scheduling points, the waiting threads will start processing tasks immediately as they become available.

Parallel task producer - more NUMA friendly:

void saxpy_tasks(float* x, float* y, float a, int N) {
   #pragma omp parallel
   {
      #pragma omp for
      for (int i = 0; i < N; i++)
         #pragma omp task
         {
            y[i] = y[i]+a*x[i];
         }
   }
}

In this case the task creation loop would be shared among the threads.

If you do not know what NUMA is, ignore the comments about NUMA friendliness.

answered Oct 21 '22 by Hristo Iliev