 

Multi-Core Programming. Boost's MPI, OpenMP, TBB, or something else?

I am a complete novice in multi-core programming, but I do know how to program in C++.

Now, I am looking around for a multi-core programming library, just to give it a try for fun. So far I have found three APIs: Boost's MPI, OpenMP, and TBB, but I am not sure which one I should stick with.

For anyone who has experience with any of these three APIs (or any other), could you please tell me the differences between them? Are there any factors to consider, like AMD vs. Intel architecture?

Karl asked May 23 '10

People also ask

Is MPI better than OpenMP?

If you have a problem that is small enough to be run on just one node, use OpenMP. If you know that you need more than one node (and thus definitely need MPI), but you favor code readability/effort over performance, use only MPI.

What is MPI and OpenMP?

OpenMP (shared memory): parallel programming within a single node. MPI (distributed memory): parallel computing running across multiple nodes.

Can you use OpenMP and MPI together?

MPI and OpenMP can be used at the same time to create a Hybrid MPI/OpenMP program.

What is multithreading approach used in OpenMP?

OpenMP is an implementation of multithreading, a method of parallelizing whereby a primary thread (a series of instructions executed consecutively) forks a specified number of sub-threads and the system divides a task among them.


1 Answer

As a starting point I'd suggest OpenMP. With this you can very simply do three basic types of parallelism: loops, sections, and tasks.

Parallel loops

These allow you to split loop iterations over multiple threads. For instance:

#pragma omp parallel for
for (int i=0; i<N; i++) {...}

If you were using two threads, the first thread would perform the first half of the iterations and the second thread the second half.

Sections

These allow you to statically partition the work over multiple threads. This is useful when there is obvious work that can be performed in parallel. However, it's not a very flexible approach.

#pragma omp parallel sections
{
  #pragma omp section
  {...}
  #pragma omp section
  {...}
}

Tasks

Tasks are the most flexible approach. These are created dynamically and their execution is performed asynchronously, either by the thread that created them, or by another thread.

#pragma omp task
{...}

Advantages

OpenMP has several things going for it.

  • Directive-based: the compiler does the work of creating and synchronizing the threads.

  • Incremental parallelism: you can focus on just the region of code that you need to parallelise.

  • One source base for serial and parallel code: the OpenMP directives are only recognized by the compiler when you build with a flag (-fopenmp for gcc), so the same source can generate both serial and parallel binaries. This means you can turn off the flag and check whether the serial version produces the same result, which isolates parallelism errors from errors in the algorithm itself.
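For example, with gcc (the file name here is hypothetical), the same source builds both ways:

```shell
# Parallel build: OpenMP pragmas are honoured and the runtime is linked in.
gcc -fopenmp -O2 my_program.c -o my_program_parallel

# Serial build from the identical source: the pragmas are silently ignored.
gcc -O2 my_program.c -o my_program_serial

# If the two binaries disagree, the bug is in the parallelisation,
# not in the algorithm.
```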

You can find the entire OpenMP spec at http://www.openmp.org/

Darryl Gove answered Oct 07 '22