MPI implementation: Can one MPI_Recv receive messages from many MPI_Send calls?

I am trying to use MPI_Send and MPI_Recv to pass the best solutions found among several processes. The best solution found in each process is supposed to be passed to a control process, which stores all the best solutions and sends them to the other processes when required. My question is how to implement this. For example, once process 1 finds a new best, it can call MPI_Send to send it to the control process. Is there a way for the control process to detect that there is a message to receive? Does each MPI_Send require a matching MPI_Recv? Looking forward to hearing advice from you experts. Thanks!

Thanks for your advice. What I am thinking of doing is to let several worker processes send messages to one control process. The worker processes decide when to send; the control process has to detect when to receive. Can MPI_Probe do this?

asked Feb 25 '10 by Jackie


3 Answers

Yes, MPI_Recv can specify MPI_ANY_SOURCE as the rank of the source of a message, so you should be able to do what you want.
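
For example, here is a minimal sketch of that pattern (my own illustration, not from the answer). It assumes each best solution is a single double and that every worker sends exactly one update to rank 0:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* Control process: accept one update from each worker,
             * in whatever order they happen to arrive. */
            for (int i = 0; i < size - 1; i++) {
                double best;
                MPI_Status status;
                MPI_Recv(&best, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                         MPI_ANY_TAG, MPI_COMM_WORLD, &status);
                printf("best %f from rank %d\n", best, status.MPI_SOURCE);
            }
        } else {
            double best = (double)rank;  /* placeholder "solution" */
            MPI_Send(&best, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

The status argument tells the control process which worker actually sent each message.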

answered Oct 19 '22 by High Performance Mark


MPI_Recv can use MPI_ANY_SOURCE as a way to receive a message from any other rank.

Depending on the workload and nature of the control process, you may want to retain control in your code and only enter the MPI library from time to time. In that case, MPI_Irecv on MPI_ANY_SOURCE combined with MPI_Test might be a good way to proceed, as sketched below.
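
A minimal sketch of that polling style, assuming rank 0 is the control process and the payload is a single double (both assumptions are mine; the commented work slice is a placeholder). Run with at least two ranks:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            double incoming;
            MPI_Request req;
            MPI_Status status;
            int done = 0;
            /* Post a nonblocking receive matching any worker. */
            MPI_Irecv(&incoming, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                      MPI_ANY_TAG, MPI_COMM_WORLD, &req);
            while (!done) {
                /* ... do a slice of the control process's own work ... */
                MPI_Test(&req, &done, &status);  /* poll: has it arrived? */
            }
            printf("update %f from rank %d\n", incoming, status.MPI_SOURCE);
            /* Post another MPI_Irecv here to keep listening. */
        } else if (rank == 1) {
            double best = 1.5;  /* placeholder solution */
            MPI_Send(&best, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }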

If there is some processing you need to do based on the contents of the message, MPI_Probe or MPI_Iprobe allows inspection of the message header BEFORE the message is actually received with MPI_Recv. For instance, MPI_Probe allows the size of the message to be determined, so that an appropriately sized buffer can be allocated first.
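
A sketch of that probe-then-receive pattern, under the assumption that workers send variable-length arrays of doubles to rank 0 (run with at least two ranks):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Status status;
            int count;
            /* Block until a message header is available, without
             * consuming the message itself. */
            MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            /* Ask how many doubles the pending message contains. */
            MPI_Get_count(&status, MPI_DOUBLE, &count);
            double *buf = malloc(count * sizeof(double));
            /* Receive exactly that message by matching its source/tag. */
            MPI_Recv(buf, count, MPI_DOUBLE, status.MPI_SOURCE,
                     status.MPI_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("received %d doubles from rank %d\n",
                   count, status.MPI_SOURCE);
            free(buf);
        } else if (rank == 1) {
            double sol[3] = {1.0, 2.0, 3.0};  /* variable-length solution */
            MPI_Send(sol, 3, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }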

In addition, if all the working ranks will occasionally reach a "barrier" point where best solutions should be checked, an MPI_Gather / MPI_Bcast collective operation might also be appropriate; see the sketch below.
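
A sketch of that collective variant, assuming the objective is minimized and each rank's best is a single double (these specifics are my assumptions):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local_best = (double)(rank + 1) * 10.0;  /* placeholder */
        double *all = NULL;
        if (rank == 0)
            all = malloc(size * sizeof(double));

        /* Every rank contributes its best; rank 0 collects them all. */
        MPI_Gather(&local_best, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE,
                   0, MPI_COMM_WORLD);

        double global_best = local_best;
        if (rank == 0) {
            for (int i = 0; i < size; i++)
                if (all[i] < global_best) global_best = all[i];
            free(all);
        }
        /* Rank 0 announces the winner to every rank. */
        MPI_Bcast(&global_best, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        printf("rank %d sees global best %f\n", rank, global_best);

        MPI_Finalize();
        return 0;
    }

If only the best value itself is needed, MPI_Allreduce with MPI_MIN (or MPI_MINLOC, to also learn the winning rank) collapses the gather-and-broadcast into a single call.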

Keep in mind that ranks that enter into long computational phases can sometimes interfere with good message propagation. If there is an extended computational phase, it can be helpful to ensure that all MPI messages have been delivered prior to that phase. This becomes more important if an RDMA-style interconnect is being used in the cluster. MPI_Barrier will ensure that all ranks enter the MPI_Barrier before any rank can return from the MPI_Barrier call.
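
As a sketch of that flushing step, assuming the rank's pending nonblocking requests are tracked in an array (a hypothetical helper, not a fixed recipe):

    #include <mpi.h>

    /* Complete any of this rank's outstanding nonblocking operations,
     * then synchronize every rank before an extended, MPI-free
     * computational phase begins. */
    void flush_before_compute(MPI_Request *reqs, int nreqs) {
        MPI_Waitall(nreqs, reqs, MPI_STATUSES_IGNORE);
        /* No rank returns from the barrier until all ranks have
         * entered it, so no one starts computing early. */
        MPI_Barrier(MPI_COMM_WORLD);
    }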

answered Oct 19 '22 by Stan Graves


Have a look at MPI_Probe.

answered Oct 19 '22 by Phil Miller