I'm writing a light wrapper in C++ for MPI. To make things easier, I have some functions return an MPI_Request object rather than take one as a pointer. The code works fine on my computer, although I'm concerned it could cause problems with a different implementation of MPI.
Below is some example code:
template<class T> MPI_Request ireceive(T* data, int count, int source, int tag)
{
    MPI_Request request;
    MPI_Irecv(data, get_mpi_type<T>::mul * count, get_mpi_type<T>::type(), source, tag, MPI_COMM_WORLD, &request);
    return request;
}
template<class T> MPI_Request ireceive(std::vector<T>& dest, int source, int tag)
{
    MPI_Status status = probe(source, tag);
    int size = get_msg_size<T>(status);
    dest.clear();
    dest.resize(size);
    return ireceive(dest.data(), size, source, tag);
}
MPI_Status wait(MPI_Request& request)
{
    MPI_Status status;
    MPI_Wait(&request, &status);
    return status;
}
MPI_Status test(MPI_Request& request, int& flag)
{
    MPI_Status status;
    MPI_Test(&request, &flag, &status);
    return status;
}
The first two functions return an MPI_Request object directly, and the last two return an MPI_Status object. I am concerned that, with another MPI implementation, these functions could cause undefined behavior.
Is it dangerous to return MPI_Request and MPI_Status objects by value?
The MPI_Status structure is basically a small collection of integers that reports the result of a particular communication, be it a point-to-point receive or a non-blocking operation. Copying this structure is not dangerous, and you can do it with either an assignment or memcpy. The performance penalty of copying it should not be a problem, as its typical size is around 20 bytes (MPI 3.1).
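To illustrate, here is a minimal sketch (the function names are made up, not from the question): a status returned by value can still be queried later, for example with MPI_Get_count or through its MPI_SOURCE and MPI_TAG fields.

#include <mpi.h>

MPI_Status probe_blocking(int source, int tag)
{
    MPI_Status status;
    MPI_Probe(source, tag, MPI_COMM_WORLD, &status); // blocks until a matching message is pending
    return status;                                   // copying the struct is well-defined
}

int count_ints(const MPI_Status& status)
{
    int count = 0;
    MPI_Get_count(&status, MPI_INT, &count);         // the copy still describes the same message
    return count;
}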
On the other hand, MPI_Request is not as trivial. I also wrote some C++ wrappers for MPI for a project (because the C++ API is deprecated and not consistent across implementations). An MPI_Request is basically a handle (think of it as an opaque pointer) used to check the completion status of an operation. Once the request is released (because a non-blocking operation has completed, or because you called MPI_Request_free or MPI_Cancel), the object is no longer valid and the MPI_Request handle is set to MPI_REQUEST_NULL.
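A hedged sketch of the hazard that copies introduce (a made-up example, not code from the question): MPI resets only the handle you pass to MPI_Wait; any other copy keeps the stale value.

#include <mpi.h>

void dangling_copy_example(int* buf)
{
    MPI_Request request;
    MPI_Irecv(buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);

    MPI_Request copy = request;            // copying the handle compiles fine...

    MPI_Wait(&request, MPI_STATUS_IGNORE); // 'request' is now MPI_REQUEST_NULL

    // ...but 'copy' still holds the old value, which no longer refers to a
    // valid request. Passing it to MPI_Wait or MPI_Test is erroneous.
}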
In my experience, MPI_Request wrapper classes are better made non-copyable; you can still share them through references or pointers. This will save you some debugging time. A minimal sketch of such a wrapper follows.
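The sketch below only illustrates that recommendation (the class name and interface are hypothetical, not the answer's actual code): the wrapper is move-only, and the moved-from object is left holding MPI_REQUEST_NULL.

#include <mpi.h>
#include <utility>

class Request {
public:
    Request() = default;

    // Non-copyable: two copies of the same handle are an invitation for bugs.
    Request(const Request&) = delete;
    Request& operator=(const Request&) = delete;

    // Movable: the moved-from object ends up holding MPI_REQUEST_NULL.
    Request(Request&& other) noexcept
        : handle_(std::exchange(other.handle_, MPI_REQUEST_NULL)) {}
    Request& operator=(Request&& other) noexcept
    {
        handle_ = std::exchange(other.handle_, MPI_REQUEST_NULL);
        return *this;
    }

    MPI_Status wait()
    {
        MPI_Status status;
        MPI_Wait(&handle_, &status);   // completing the wait resets handle_ to MPI_REQUEST_NULL
        return status;
    }

    MPI_Request* ptr() { return &handle_; }   // to pass into MPI_Irecv and friends

private:
    MPI_Request handle_ = MPI_REQUEST_NULL;
};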
See the MPI 3.1 standard, Section 3.7, Nonblocking Communication:
A call to MPI_TEST returns flag = true if the operation identified by request is complete. In such a case, the status object is set to contain information on the completed operation. If the request is an active persistent request, it is marked as inactive. Any other type of request is deallocated and the request handle is set to MPI_REQUEST_NULL. [...] One is allowed to call MPI_TEST with a null or inactive request argument. In such a case the operation returns with flag = true and empty status.
Note: a null or inactive request is not the same as an invalid request handle, which is what you get if, for example, you copy an MPI_Request, release the underlying request, and set only one of the copies to MPI_REQUEST_NULL.