
OpenMP or MPI or OpenMPI for a distributed memory cluster?

I want to parallelize a serial C code on a distributed-memory cluster: 25 blades with 4 cores each (100 cores in total), connected by InfiniBand. So far I have simply used PBS to spread several serial runs of the program across the different nodes. Now I wonder:

  1. Which is the best option in this case: OpenMP, MPI, or OpenMPI? (At the moment I do not want to try a mixed approach, as I am just starting to learn.)
  2. Where can I find examples/tutorials?
  3. For a simple serial code with a main for loop, will an OpenMP/MPI/OpenMPI version always perform better than a queueing approach like PBS?
asked Dec 15 '10 by Open the way


2 Answers

OpenMP is for shared-memory computers; I believe you can't use it across distributed memory, so you will have to use MPI.

A good MPI tutorial is: https://computing.llnl.gov/tutorials/mpi/
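
For orientation, here is a minimal MPI "hello world" in C, roughly the first example in most tutorials (this is my own illustration, not from the linked tutorial; the file name and launch commands are the usual ones but may differ on your cluster):

    /* Minimal MPI example: every process reports its rank and the total
       number of processes. Typically built and run with something like:
           mpicc hello_mpi.c -o hello_mpi
           mpirun -np 4 ./hello_mpi                                        */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process         */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut the runtime down      */
        return 0;
    }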

answered by Dr. Snoopy


Distributed memory kind of rules out OpenMP, which is for shared-memory computing. MPI is a standard, and OpenMPI is an implementation of that standard (there are others, such as MPICH or LAM-MPI). So:

  1. MPI, and OpenMPI is a perfectly respectable implementation thereof. However, I think it's relatively unusual to find such clusters as yours without an MPI installation, so a better choice might be the MPI installation you already have. You should certainly speak to the system's managers about this. And you should certainly not try to install OpenMPI on a cluster without knowing what you are doing.

  2. All over the place. Here's one good place to start.

  3. PBS is a job-scheduling system. On a cluster such as yours you would typically have both an MPI installation and a job scheduler; if not PBS, then Grid Engine is the most likely alternative.

As you've already discovered, you can use PBS (or Grid Engine, for that matter) to dispatch multiple serial jobs to a cluster. You can also use it to dispatch a single parallel job to a cluster for execution on however many processors you ask for. Your question raises the possibility, though, that your problem is embarrassingly parallel and that MPI may be overkill for you. Google the term embarrassingly parallel before you commit yourself to parallelising your program -- unless you want to for the sheer enjoyment which will undoubtedly result.
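
For a sense of what the non-embarrassingly-parallel route looks like, here is a rough sketch (my own illustration, not part of the answer) of parallelising a simple main for loop with MPI: each rank handles a strided subset of the iterations and the partial results are combined with MPI_Reduce. The loop body here (summing i*i) is only a placeholder for whatever your real per-iteration work is.

    /* Sketch: distribute the iterations of a big loop over MPI ranks.
       Rank r takes iterations r, r+size, r+2*size, ... and the per-rank
       partial sums are combined on rank 0.                                */
    #include <stdio.h>
    #include <mpi.h>

    #define N 1000000LL   /* placeholder problem size */

    int main(int argc, char *argv[])
    {
        int rank, size;
        long long local = 0, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (long long i = rank; i < N; i += size)
            local += i * i;   /* stand-in for the real per-iteration work */

        /* Sum the per-rank results into 'total' on rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %lld\n", total);

        MPI_Finalize();
        return 0;
    }

If instead your runs are completely independent, the PBS approach you already use gives you the same parallelism with no code changes, which is the point made in the paragraph above.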

answered by High Performance Mark