I'm not sure when I have to use different values for the tag field in MPI send and receive calls. I've read the following, but I can't understand it:
Sometimes there are cases when A might have to send many different types of messages to B. Instead of B having to go through extra measures to differentiate all these messages, MPI allows senders and receivers to also specify message IDs with the message (known as tags). When process B only requests a message with a certain tag number, messages with different tags will be buffered by the network until B is ready for them.
Do I have to use tags, for example, when I have multiple MPI_Isend calls (with different tags) from process A and only one MPI_Irecv call in process B?
The message tag is used to differentiate messages, in case rank A has sent multiple pieces of data to rank B. When rank B requests a message with a particular tag, only a message sent with that tag is received into the data buffer.
The MPI_Send and MPI_Recv functions use MPI datatypes to specify the structure of a message at a higher level. For example, if a process wishes to send one integer to another, it would use a count of one and a datatype of MPI_INT.
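A minimal sketch of the count/datatype pairing described above (assuming a run with at least two ranks; the variable name `value` is just illustrative):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int value = 42;
        // count = 1, datatype = MPI_INT: send a single integer to rank 1
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```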
Deadlock occurs when the message passing cannot be completed. Consider two processes that each call MPI_Send before posting a receive, and assume that MPI_Send does not complete until the corresponding MPI_Recv is posted, and vice versa. Neither MPI_Send will ever complete, and the program deadlocks.
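The deadlock pattern described (the original code snippet appears to have been lost in formatting) can be sketched as follows. MPI_Ssend is used here because it is guaranteed to block until the matching receive is posted; a plain MPI_Send may or may not buffer the message, so the deadlock would only be intermittent:

```cpp
#include <mpi.h>

// Run with 2 ranks: both processes send first, so neither ever
// reaches its MPI_Recv, and the program hangs.
int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank, buf = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = 1 - rank;  // rank 0 talks to rank 1 and vice versa

    // MPI_Ssend blocks until the matching receive is posted.
    MPI_Ssend(&buf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);  // never completes
    MPI_Recv(&buf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
```

Swapping the send/receive order on one of the ranks (or using MPI_Sendrecv) breaks the cycle.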
MPI_Probe obtains information about a message waiting for reception without actually receiving it; the probed message remains pending afterwards. This allows, for instance, receiving messages of unknown length by probing them first to learn their length.
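A sketch of that probe-then-receive pattern (assuming a two-rank run; the message length 37 is arbitrary and, from rank 1's point of view, unknown in advance):

```cpp
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        std::vector<int> data(37, 7);  // length not known to the receiver
        MPI_Send(data.data(), (int)data.size(), MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);  // message stays pending
        int count;
        MPI_Get_count(&status, MPI_INT, &count);   // learn its length
        std::vector<int> data(count);              // size the buffer to fit
        MPI_Recv(data.data(), count, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("received %d ints\n", count);
    }

    MPI_Finalize();
    return 0;
}
```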
Message tags are optional. You can use arbitrary integer values for them and use whichever semantics you like and seem useful to you.
Like you suggested, tags can be used to differentiate between messages that consist of different types (MPI_INTEGER, MPI_REAL, MPI_BYTE, etc.). You could also use tags to add some information about what the data actually represents: if you have an n×n matrix, a message to send a row of this matrix will consist of n values, as will a message to send a column of that matrix; nevertheless, you may want to treat row and column data differently.
Note that the receive operation has to match the tag of a message it wants to receive. This, however, does not mean that you have to specify the same tag; you can also use the wildcard MPI_ANY_TAG as the message tag, in which case the receive operation will match arbitrary message tags. You can find out which tag the sender used with the help of MPI_Probe.
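A sketch of the wildcard match (assuming a two-rank run; the tag values TAG_ROW and TAG_COL are illustrative, echoing the row/column example above). Note that besides MPI_Probe, the MPI_Status filled in by the receive itself also exposes the tag:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    const int TAG_ROW = 1, TAG_COL = 2;  // arbitrary, illustrative tags

    if (rank == 0) {
        double x = 3.14;
        MPI_Send(&x, 1, MPI_DOUBLE, 1, TAG_ROW, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double x;
        MPI_Status status;
        // Wildcard tag: match whatever arrives, then inspect the tag used.
        MPI_Recv(&x, 1, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_TAG == TAG_ROW)
            printf("got row data: %f\n", x);
        else if (status.MPI_TAG == TAG_COL)
            printf("got column data: %f\n", x);
    }

    MPI_Finalize();
    return 0;
}
```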
There is no requirement that you use tags. If you need to get the message size before parsing the message, you can use MPI_Probe; that way you can distinguish different messages without assigning tags. I typically use tags because MPI_Recv requires that you know the message size (or at least an upper bound for the receive buffer) before fetching the data. If you have different sizes and types, tags can help you differentiate between them by having multiple threads or processes each listen for a different subset: tag 1 can mean messages of type X and tag 2 messages of type Y. Tags also give you multiple "channels" of communication without the work of creating separate communicators and groups.
```cpp
#include <mpi.h>
#include <iostream>

// Note: this example assumes a run with exactly 3 ranks.
int main(int argc, char* argv[]) {
    // Init MPI
    MPI_Init(&argc, &argv);

    // Get the rank and size
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        // Master
        const char* message_r1 = "Hello Rank 1";
        const char* message_r2 = "Hello Rank 2";

        // Send a message with tag 0 to rank 1
        MPI_Send(message_r1, 13, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        // Send a message with tag 1 to rank 2
        MPI_Send(message_r2, 13, MPI_CHAR, 2, 1, MPI_COMM_WORLD);
    } else {
        // Workers: each waits for the message carrying its own tag (rank-1)
        char buffer[256];
        MPI_Status status;
        MPI_Recv(buffer, 13, MPI_CHAR, 0, rank - 1, MPI_COMM_WORLD, &status);
        std::cout << "Rank: " << rank << ", Message: " << buffer << std::endl;
    }

    // Finalize MPI
    MPI_Finalize();
    return 0;
}
```