
MPI on a multicore machine

Tags:

mpi

My situation is quite simple: I want to run an MPI-enabled program on a single multiprocessor/multicore machine with, let's say, 8 cores.

My implementation of MPI is MPICH2.

As I understand it, I have a few options:

$ mpiexec -n 8 my_software

$ mpiexec -n 8 -hosts {localhost:8} my_software

or I could also tell Hydra to "fork" rather than "ssh":

$ mpiexec -n 8 -launcher fork my_software

Could you tell me if there will be any differences, or if the behavior will be the same?

Of course, as all my processes will be on the same machine, I don't want the "message passing" to go through the network (not even the local loopback) but through shared memory. As I understand it, MPI will figure that out itself, and that will be the case for all three options.
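For concreteness, here is a minimal MPI test program one could launch with any of the commands above (the file name hello_mpi.c and the output format are just for illustration); every rank should report the same host name:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    /* With all 8 processes on one machine, every rank prints the same name. */
    printf("rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}

$ mpicc hello_mpi.c -o my_software
$ mpiexec -n 8 ./my_software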

Cedric H. asked Mar 08 '12




2 Answers

Simple answer:

All methods should lead to the same performance. You'll have 8 processes running on the cores and using shared memory.

Technical answer:

"fork" has the advantage of compatibility on systems where rsh/ssh process spawning would be a problem, but it can, I guess, only start processes locally.

In the end (unless MPI is configured in some odd way), all processes on the same machine will end up using shared memory, and neither the launcher nor the host specification method should matter for this. The communication method is handled by another parameter (the device/channel).
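For MPICH2 specifically, the device/channel is chosen when the library itself is built, not at launch time. A sketch of the relevant configure options (version details are from memory, so check your own build):

$ ./configure --with-device=ch3:nemesis   # nemesis: shared memory within a node, network between nodes
$ ./configure --with-device=ch3:sock      # sockets everywhere, even between local processes

I believe nemesis has been the default device since MPICH2 1.3, which is why all three launch commands above should end up communicating through shared memory.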

The specific syntax of the host specification method can let you bind processes to specific CPU cores, in which case you might get slightly better or worse performance, depending on your application; see the example below.
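For example, Hydra can bind processes to cores at launch. The exact option depends on the MPICH version (newer Hydra uses -bind-to, older MPICH2 releases had a -binding option instead; check mpiexec -help for your build):

$ mpiexec -n 8 -bind-to core ./my_software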

Blklight answered Sep 22 '22


If you've got everything set up correctly then I don't see that your program's behaviour will depend on how you launch it, unless, that is, it fails to launch under one or other of the options. (Which would mean that you didn't have everything set up correctly in the first place.)

If memory serves me well, the way in which message passing is implemented depends on the MPI device(s) you use. It used to be that you would use the MPICH ch_shmem device. This managed the passing of messages between processes, but it did use buffer space: messages were sent to and from this space. So message passing was done, but at memory bus speed.
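From the application's point of view that buffering is invisible; the same send/receive code runs whether the transport is a network or a shared-memory segment. A minimal sketch (file and variable names are mine):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, msg = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        msg = 42;
        /* On a single machine this is typically staged through a
           shared-memory buffer rather than the network stack. */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}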

I write in the past tense because it's been a while since I was close enough to the hardware to know (or, frankly, care) about low-level implementation details, and more modern MPI installations might be a bit, or a lot, more sophisticated. I'll be surprised, and pleased, to learn that any modern MPI installation does, in fact, replace message-passing with shared-memory read/write on a multicore/multiprocessor machine. I'll be surprised because it would require translating message-passing into shared-memory access, and I'm not sure that that is easy (or easy enough to be feasible) for the whole of MPI. I think it's far more likely that current implementations still rely on message-passing across the memory bus through some buffer area. But, as I state, that's only my best guess and I'm often wrong on these matters.
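For what it's worth, the MPI-3 standard (which postdates this answer) did add a way to get genuine shared memory between ranks on the same node: MPI_Win_allocate_shared hands each process a window that its node-mates can read and write with plain loads and stores. A hedged sketch, assuming all ranks share one machine as in the question (on a cluster you would first split the communicator with MPI_Comm_split_type and MPI_COMM_TYPE_SHARED):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, disp;
    int *local, *next;
    MPI_Aint sz;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one int to a window backed by real shared memory. */
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            MPI_COMM_WORLD, &local, &win);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    *local = rank * 100;
    MPI_Win_sync(win);            /* make the local store visible */
    MPI_Barrier(MPI_COMM_WORLD);  /* wait until every rank has written */
    MPI_Win_sync(win);

    /* A plain pointer into the next rank's memory: no message passing at all. */
    MPI_Win_shared_query(win, (rank + 1) % size, &sz, &disp, &next);
    printf("rank %d reads %d directly from rank %d\n",
           rank, *next, (rank + 1) % size);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

That said, ordinary two-sided MPI_Send/MPI_Recv traffic still, as far as I know, goes through the implementation's internal buffers, much as described above.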

High Performance Mark answered Sep 23 '22