This question is related to this question: MPI and D: Linker Options
I am trying to get MPI working from D. There are several posts to be found on the net, but none of the ones I found actually worked. So here is what I did so far:
I took the mpi.d from here https://github.com/1100110/OpenMPI/blob/master/mpi.d and set up a minimal program:
import mpi;
import std.stdio;
void* MPI_COMM_WORLD = cast(void*)0;
int main(string[] args)
{
int rank, size;
int argc = cast(int)args.length;
char *** argv = cast(char***)&args;
MPI_Init (&argc, argv); /* starts MPI */
MPI_Comm_rank (MPI_COMM_WORLD, &rank); /* get current process id */
MPI_Comm_size (MPI_COMM_WORLD, &size); /* get number of processes */
writefln( "Hello world from process %d of %d", rank, size );
MPI_Finalize();
return 0;
}
I compile with
dmd test_mpi.d -L-L/usr/lib/openmpi -L-lmpi -L-ldl -L-lhwloc
or
gdc test_mpi.d -pthread -L/usr/lib/openmpi -lmpi -ldl -lhwloc -o test_mpi
and run with
mpirun -n 2 ./test_mpi
This is the error I get:
[box:1871] *** An error occurred in MPI_Comm_rank
[box:1871] *** on communicator MPI_COMM_WORLD
[box:1871] *** MPI_ERR_COMM: invalid communicator
[box:1871] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 1870 on
node bermuda-iii exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[box:01869] 1 more process has sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[box:01869] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Obviously I do call MPI_Init and MPI_Finalize. So what am I missing?
In Open MPI's C bindings, communicator handles are pointers to the actual communicator structures. MPI_COMM_WORLD is a pointer to the pre-created world communicator structure, not a NULL pointer as you define it. That's why Open MPI aborts in the call to MPI_Comm_rank - it is equivalent to calling MPI_Comm_rank(NULL, &rank) in C.
If you take a look at line 808 of mpi.d, you will notice that MPI_COMM_WORLD is already defined as:
MPI_COMM_WORLD = cast(void*) &(ompi_mpi_comm_world),
So your code should work once you remove the line where you redefine MPI_COMM_WORLD.
You are also not casting string[] to char*** correctly. You should do this instead:
import std.string, std.algorithm, std.array;
char** argv = cast(char**)map!(toStringz)(args).array.ptr;
MPI_Init (&argc, &argv);
Here's how it works:
- map applies toStringz to each element of args, producing zero-terminated C strings.
- Because map returns a lazy range, array is used to turn it into an actual array.
- .ptr takes the pointer to the array's first element.