 

Can someone suggest a good way to understand how MPI works?

Tags:

mpi


asked Jan 16 '11 by mynameisalon


People also ask

How does MPI work?

MPI assigns an integer to each process, beginning with 0 for the parent process and incrementing each time a new process is created. A process ID is also called its "rank". MPI also provides routines that let a process determine its own rank, as well as the total number of processes that have been created.
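For example, a minimal C sketch (assuming an MPI implementation such as MPICH or Open MPI is installed, compiled with mpicc and launched with mpirun) that queries its rank and the process count:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID (rank) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("I am rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```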

Why do we need MPI?

Message Passing Interface (MPI) is a communication protocol for parallel programming. MPI is specifically used to allow applications to run in parallel across a number of separate computers connected by a network.

How is MPI implemented?

In MPICH, collective operations are implemented on top of MPICH point-to-point operations. A collective operation retrieves the hidden communicator from the communicator passed in the argument list and then uses standard MPI point-to-point calls with this hidden communicator.
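To illustrate the idea only, here is a naive "broadcast" written with point-to-point calls. This is not MPICH's actual algorithm (real implementations use optimized schemes such as tree-based broadcasts over the hidden communicator); the function name naive_bcast is made up for this sketch.

```c
#include <mpi.h>

/* Naive broadcast: the root sends the value to every other rank,
   one MPI_Send per destination; everyone else posts a matching MPI_Recv. */
void naive_bcast(int *value, int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == root) {
        for (int dest = 0; dest < size; dest++) {
            if (dest != root)
                MPI_Send(value, 1, MPI_INT, dest, 0, comm);
        }
    } else {
        MPI_Recv(value, 1, MPI_INT, root, 0, comm, MPI_STATUS_IGNORE);
    }
}
```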

What is MPI what are its main characteristics?

MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation." MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in high-performance computing today.


1 Answer

If you are familiar with threads, then you can treat each node as a thread (to an extent).

You send a message (work) to a node; it does some work and then returns some results to you.

Similar behaviors between threads & MPI:

Both involve partitioning the work and processing each part separately.

Both incur overhead as more nodes/threads are involved, but MPI's overhead is more significant than a thread's: passing messages between nodes is expensive, and if the work is not carefully partitioned you can end up spending more time passing messages than doing the actual computation.

Different behaviors:

They have different memory models: each MPI node does not share memory with the others and knows nothing about the rest of the world unless you send something to it (see the sketch below).
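A minimal sketch of that "send work, get results back" pattern, assuming rank 0 acts as the master and the work is just squaring a number (the numbers and the master/worker split are illustrative, not anything MPI mandates):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* "master": hand out one piece of work per worker, then gather results */
        for (int w = 1; w < size; w++) {
            int work = w * 10;
            MPI_Send(&work, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
        for (int w = 1; w < size; w++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("result from rank %d: %d\n", w, result);
        }
    } else {
        /* "worker": receive work, compute in its own private memory, send result back */
        int work;
        MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        int result = work * work;
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Note that the only way data moves between ranks here is through the explicit MPI_Send/MPI_Recv calls; nothing is shared implicitly, which is exactly the memory-model difference from threads.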

answered Nov 15 '22 by Yuan