 

Abstract implementation of non-blocking MPI calls

Tags:

mpi

Non-blocking sends/recvs in MPI return immediately, and the operation is completed in the background. The only way I can see that happening is that the current process/thread spawns another process/thread, loads an image of the send/recv code into it, and itself returns. This new process/thread then completes the operation and sets a flag somewhere that Wait/Test checks. Am I correct?

Gaurav Saxena asked Jan 12 '14

People also ask

What is non-blocking in MPI?

Blocking and non-blocking sends. Blocking: return only when the buffer is ready to be reused. Non-blocking: return immediately. Buffering: data is kept until it is received.

What is MPI blocking?

The MPI standard requires that a blocking send call blocks (and hence NOT return to the call) until the send buffer is safe to be reused. Similarly, the Standard requires that a blocking receive call blocks until the receive buffer actually contains the intended message.

Which form of communication has both blocking and non-blocking variant *?

MPI provides a variety of send and receive calls in its interface. These calls can be classified into one of two groups: those that cause a task to pause while it waits for messages to be sent and received, and those that do not. These are the blocking and non-blocking point-to-point communication calls defined by MPI.


1 Answer

There are a few ways that progress can happen:

  1. In a separate thread. This is usually an option in most MPI implementations (usually at configure/compile time). In this version, as you speculated, the MPI implementation has another thread that runs a separate progress engine. That thread manages all of the MPI messages and sending/receiving data. This way works well if you're not using all of the cores on your machine as it makes progress in the background without adding overhead to your other MPI calls.
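Whether a progress thread exists is implementation-specific; some MPI implementations expose it as an environment variable rather than (or in addition to) a configure-time option. The variable names below are real knobs for MPICH and Intel MPI respectively, but they are not portable MPI, and `./my_mpi_app` is just a placeholder:

```shell
# MPICH: request a dedicated asynchronous progress thread per process
MPICH_ASYNC_PROGRESS=1 mpirun -np 4 ./my_mpi_app

# Intel MPI: equivalent implementation-specific knob
I_MPI_ASYNC_PROGRESS=1 mpirun -np 4 ./my_mpi_app
```

Check your implementation's documentation; other MPIs (e.g. Open MPI) may only offer this at build time.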

  2. Inside other MPI calls. This is the more common way of doing things and, I believe, the default for most implementations. In this version, non-blocking operations are started when you initiate the call (MPI_I<something>) and are essentially added to an internal queue. Nothing (probably) happens on that operation until you make another MPI call later that actually does some blocking communication (or waits for the completion of previous non-blocking calls). When you enter that future MPI call, in addition to doing whatever you asked it to do, it will run the progress engine (the same thing that runs in a thread in version #1). Depending on what that MPI call is supposed to do, the progress engine may run for a while or just cycle through once. For instance, if you call MPI_WAIT on an MPI_IRECV, you'll stay inside the progress engine until you receive the message you're waiting for. If you just do an MPI_TEST, it might cycle through the progress engine once and then jump back out.

  3. More exotic methods. As Jeff mentions in his post, there are more exotic methods that depend on the hardware on which you're running. You may have a NIC that will do some magic for you in terms of moving your messages in the background or some other way to speed up your MPI calls. In general, these are very specific to the implementation and hardware on which you're running, so if you want to know more about them, you'll need to be more specific in your question.

All of this is specific to your implementation, but most of them work in some way similar to this.

Wesley Bland answered Oct 08 '22