I have been testing how exactly MPI works with the following code:
#include <iostream>
#include <mpi.h>
using namespace std;

int main(int argc, char *argv[]) {
    double r = 3.0;                                   // initialized before MPI_Init
    int id;
    int p;
    int a[100];
    for (int i = 0; i < 100; ++i) { a[i] = i + 5; }   // also initialized before MPI_Init

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    cout << id << " " << r << " " << a[id] << endl;

    MPI_Finalize();
    cout << "Hello world " << endl;                   // executed after MPI_Finalize
    return 0;
}
I am running the code on 30 cores, but the output surprised me in two respects: every process prints the same value r = 3, and every process prints the expected value of a[id]. So does this mean that I can initialize a variable or an array before calling MPI_Init() and all processes will share the same value for that variable? By the way, I am using mpicc to compile the code.
It's perfectly valid to execute code before MPI_Init and after MPI_Finalize. Of course you are not allowed to use MPI in that code, but otherwise it's just normal C++. MPI_Init and MPI_Finalize are just library calls; they are not supposed to change control flow or discard values assigned before them (and how would that even be possible?).
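For instance, here is a minimal sketch (not from the original question; the variable names are just for illustration) that uses MPI_Initialized to query the library state and shows that a value assigned before MPI_Init is still there afterwards:

#include <iostream>
#include <mpi.h>

int main(int argc, char *argv[]) {
    double r = 3.0;          // assigned before MPI_Init
    int flag = 0;

    MPI_Initialized(&flag);  // flag == 0: the library is not initialized yet
    std::cout << "before MPI_Init: initialized=" << flag << " r=" << r << std::endl;

    MPI_Init(&argc, &argv);

    MPI_Initialized(&flag);  // flag == 1, and r still holds 3.0
    std::cout << "after MPI_Init: initialized=" << flag << " r=" << r << std::endl;

    MPI_Finalize();
    return 0;
}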
MPI_Init doesn't cancel the initialization of variables. The question you're referring to only says that initializing data in process 0 doesn't initialize it in the other processes; in that question the data were read from a file that probably existed only for process 0.
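If you actually want data that only process 0 has (for example, something it read from a file) to end up in every process, you have to communicate it explicitly. A minimal sketch using MPI_Bcast (the value 42.0 and the variable names are just for illustration):

#include <iostream>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int id;
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    double value = 0.0;
    if (id == 0) {
        value = 42.0;        // e.g. read from a file that only rank 0 can see
    }

    // Without this broadcast, every rank except 0 would still hold 0.0.
    MPI_Bcast(&value, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    std::cout << "rank " << id << " has value " << value << std::endl;

    MPI_Finalize();
    return 0;
}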
Note that MPI_Init doesn't create new processes. It's not like fork(). All the processes are created before your program starts, most likely by mpirun: launching with something like mpirun -np 30 ./your_program starts 30 independent copies of the executable, and each of them runs main() from the beginning. So in your case, each process initializes its own copy of the array.
MPI_Finalize doesn't terminate the process. It only shuts down the MPI library. The processes keep running after that, although they can no longer communicate through MPI.
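As a small illustration (again just a sketch), you can use MPI_Finalized to confirm that the library is shut down while the process itself keeps doing ordinary, non-MPI work:

#include <iostream>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    MPI_Finalize();

    int flag = 0;
    MPI_Finalized(&flag);    // flag == 1: MPI is shut down
    // No MPI communication is allowed from here on, but plain C++ still works.
    std::cout << "MPI finalized: " << flag << ", process still running" << std::endl;
    return 0;
}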