What are "source" and "destination" parameters in MPI_Cart_shift?

Here it is written that the output parameters of MPI_Cart_shift are the ranks of the source and destination processes. However, in this tutorial (code below) the value returned as the source process is later used in MPI_Isend to send messages. Can anyone clear it up: what do "source" and "destination" actually mean?

#include "mpi.h"
#include <stdio.h>
#define SIZE 16
#define UP    0
#define DOWN  1
#define LEFT  2
#define RIGHT 3

int main(int argc, char *argv[])  {
int numtasks, rank, source, dest, outbuf, i, tag=1, 
   inbuf[4]={MPI_PROC_NULL,MPI_PROC_NULL,MPI_PROC_NULL,MPI_PROC_NULL,}, 
   nbrs[4], dims[2]={4,4}, 
   periods[2]={0,0}, reorder=0, coords[2];

MPI_Request reqs[8];
MPI_Status stats[8];
MPI_Comm cartcomm;

MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

if (numtasks == SIZE) {
  MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, reorder, &cartcomm);
  MPI_Comm_rank(cartcomm, &rank);
  MPI_Cart_coords(cartcomm, rank, 2, coords);
  MPI_Cart_shift(cartcomm, 0, 1, &nbrs[UP], &nbrs[DOWN]);
  MPI_Cart_shift(cartcomm, 1, 1, &nbrs[LEFT], &nbrs[RIGHT]);

  printf("rank= %d coords= %d %d  neighbors(u,d,l,r)= %d %d %d %d\n",
         rank,coords[0],coords[1],nbrs[UP],nbrs[DOWN],nbrs[LEFT],
         nbrs[RIGHT]);

  outbuf = rank;

  for (i=0; i<4; i++) {
     dest = nbrs[i];
     source = nbrs[i];
     MPI_Isend(&outbuf, 1, MPI_INT, dest, tag, 
               MPI_COMM_WORLD, &reqs[i]);
     MPI_Irecv(&inbuf[i], 1, MPI_INT, source, tag, 
               MPI_COMM_WORLD, &reqs[i+4]);
     }

  MPI_Waitall(8, reqs, stats);

  printf("rank= %d                  inbuf(u,d,l,r)= %d %d %d %d\n",
         rank,inbuf[UP],inbuf[DOWN],inbuf[LEFT],inbuf[RIGHT]);  }
else
  printf("Must specify %d processors. Terminating.\n",SIZE);

MPI_Finalize();
}
asked Dec 28 '13 by pmichna

2 Answers

MPI_Cart_shift: Returns the shifted source and destination ranks, given a shift direction and amount

int MPI_Cart_shift(MPI_Comm comm, int direction, int displ, int *source, int *dest)

What you hand in to the function are comm, direction and displ, where direction specifies the dimension along which the shift is taken and displ is the distance of the shift.

Example

Imagine a 2D cart topology like this (names are not ranks but process-names, only for explanation):

A1  A2  A3  A4  A5
B1  B2  B3  B4  B5
C1  C2  C3  C4  C5
D1  D2  D3  D4  D5
E1  E2  E3  E4  E5

As you might already know, you are writing SPMD code in MPI, so we can pick, w.l.o.g., one process to show what is happening. Let's pick C3.

The general idea of MPI_Cart_shift is that it gives us the ranks of the processes sitting at a specified offset from us in the topology.

First, we have to decide in which direction we want to go, let's pick 0, which is the column dimension. Then we have to specify a distance to the other process, let's say this is 2.

So the call would be like:

MPI_Cart_shift(cartcomm, 0, 2, &source, &dest);

Now, the ranks placed into the source and dest variables are those of processes A3 and E3, respectively.

How to interpret the results

I (C3) want to send data to the process in the same column with a distance of 2. So this is the dest rank.

If you do the same from the viewpoint of A3: process A3 gets as its dest field the rank of C3.

And this is what source tells you: the rank of the process that would be sending me that data if it made the same MPI_Cart_shift call.
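To make this concrete, here is a minimal sketch (my addition, not part of the original answer; it assumes the cartcomm and rank set up as in the question's code) that translates the returned ranks back into coordinates, so you can see exactly which processes source and dest refer to:

int source, dest, nbr_coords[2];
MPI_Cart_shift(cartcomm, 0, 2, &source, &dest);

if (dest != MPI_PROC_NULL) {            // guard against boundary processes
    MPI_Cart_coords(cartcomm, dest, 2, nbr_coords);
    printf("rank %d sends forward to rank %d at (%d,%d)\n",
           rank, dest, nbr_coords[0], nbr_coords[1]);
}
if (source != MPI_PROC_NULL) {
    MPI_Cart_coords(cartcomm, source, 2, nbr_coords);
    printf("rank %d receives forward data from rank %d at (%d,%d)\n",
           rank, source, nbr_coords[0], nbr_coords[1]);
}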

If there is no process at the specified place, the variable contains MPI_PROC_NULL. So the results of the call at each process would look like this (with source|dest for each process, using - for MPI_PROC_NULL):

MPI_Cart_shift(cartcomm, 0, 2, &source, &dest);

 A1      A2      A3      A4      A5
-|C1    -|C2    -|C3    -|C4    -|C5

 B1      B2      B3      B4      B5
-|D1    -|D2    -|D3    -|D4    -|D5

 C1      C2      C3      C4      C5
A1|E1   A2|E2   A3|E3   A4|E4   A5|E5

 D1      D2      D3      D4      D5
B1|-    B2|-    B3|-    B4|-    B5|-

 E1      E2      E3      E4      E5
C1|-    C2|-    C3|-    C4|-    C5|-

Additional bit of information

If you create the cart with periods = 1 for some dimension, there is a virtual edge between the first and the last process of the cart along that dimension. In this example, periods[0] = 1 would make a connection between A1 and E1, between A2 and E2, and so on. If you then call MPI_Cart_shift, the counting wraps around that edge, so your output would be (a minimal setup sketch follows the table below):

 A1      A2      A3      A4      A5
D1|C1   D2|C2   D3|C3   D4|C4   D5|C5

 B1      B2      B3      B4      B5
E1|D1   E2|D2   E3|D3   E4|D4   E5|D5

 C1      C2      C3      C4      C5
A1|E1   A2|E2   A3|E3   A4|E4   A5|E5

 D1      D2      D3      D4      D5
B1|A1   B2|A2   B3|A3   B4|A4   B5|A5

 E1      E2      E3      E4      E5
C1|B1   C2|B2   C3|B3   C4|B4   C5|B5
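For reference, a minimal setup sketch of how the periodic case above could be created (my assumption, not from the original answer; the dims values and the communicator name are illustrative):

int dims[2]    = {5, 5};
int periods[2] = {1, 0};   // wrap dimension 0 only
MPI_Comm cart;
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);

int src, dst;
MPI_Cart_shift(cart, 0, 2, &src, &dst);
// with the wrap-around, the process at A1 now gets src = rank of D1 and
// dst = rank of C1, matching the first row of the table above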
answered Dec 07 '22 by Henkersmann

MPI_Cart_shift is a convenience function. Its primary usage is for data shifts, i.e. operations in which each rank sends data in a certain direction (to destination) and receives data from the opposite direction (from source) - the forward operation. When source is used as the destination and destination as the source, data flows the opposite way - the backward operation. An example of such an operation is halo swapping, which usually requires two shifts along each dimension: one forward and one backward.
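As a rough illustration of that pattern (a sketch, assuming cartcomm and rank exist as in the question; the single int "cells" stand in for real boundary data), a halo swap along one dimension could look like this:

int up, down;                              // neighbours along dimension 0
MPI_Cart_shift(cartcomm, 0, 1, &up, &down);

int top_cell = rank, bottom_cell = rank;   // boundary values to export
int halo_from_up = -1, halo_from_down = -1;

// forward shift: send my bottom boundary "down", receive my upper halo from "up"
MPI_Sendrecv(&bottom_cell, 1, MPI_INT, down, 0,
             &halo_from_up, 1, MPI_INT, up, 0,
             cartcomm, MPI_STATUS_IGNORE);

// backward shift: send my top boundary "up", receive my lower halo from "down"
MPI_Sendrecv(&top_cell, 1, MPI_INT, up, 1,
             &halo_from_down, 1, MPI_INT, down, 1,
             cartcomm, MPI_STATUS_IGNORE);

Ranks that are MPI_PROC_NULL simply turn the corresponding half of the exchange into a no-op, so no special-casing of the boundary processes is needed.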

MPI_Cart_shift is a convenience function since its action is roughly equivalent to the following set of MPI calls (the handling of non-periodic boundaries, where the real function returns MPI_PROC_NULL instead of a rank, is omitted here):

// 1. Determine the rank of the current process
int rank, source, destination;
MPI_Comm_rank(cartcomm, &rank);

// 2. Transform the rank into topology coordinates
int coords[ndims];
MPI_Cart_coords(cartcomm, rank, ndims, coords);

// 3. Save the current coordinate along the given direction
int saved_coord = coords[direction];

// 4. Compute the "+"-shifted position
coords[direction] = saved_coord + displ;
// Adjust for a periodic boundary if necessary; the extra "+ dims" keeps
// the result non-negative, since C's % may return a negative value
if (periods[direction])
   coords[direction] = (coords[direction] % dims[direction]
                        + dims[direction]) % dims[direction];

// 5. Convert to rank
MPI_Cart_rank(cartcomm, coords, &destination);

// 6. Compute the "-"-shifted position
coords[direction] = saved_coord - displ;
// Adjust for a periodic boundary (again keeping the result non-negative)
if (periods[direction])
   coords[direction] = (coords[direction] % dims[direction]
                        + dims[direction]) % dims[direction];

// 7. Convert to rank
MPI_Cart_rank(cartcomm, coords, &source);

One could also compute the rank<->coordinate transforms using plain arithmetic, without calls to MPI_Cart_rank or MPI_Cart_coords, but that would be very inflexible since the formulas change when the dimensionality of the topology changes.
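For a two-dimensional topology with row-major rank ordering (the ordering MPI_Cart_create uses) the arithmetic alluded to above would look roughly like the sketch below. Both helpers are hypothetical names of mine and hard-code the number of dimensions, which is exactly the inflexibility mentioned:

// rank -> coordinates and back, hard-wired for 2 dimensions (row-major)
int manual_rank_from_coords(const int coords[2], const int dims[2])
{
    return coords[0] * dims[1] + coords[1];
}

void manual_coords_from_rank(int rank, const int dims[2], int coords[2])
{
    coords[0] = rank / dims[1];
    coords[1] = rank % dims[1];
}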

Something very important: the ranks computed by MPI_Cart_shift (or by the equivalent code above) are relative to the cartcomm communicator. They match the ranks in the original communicator (the one passed to MPI_Cart_create) only if reorder = 0. When reordering is allowed, the ranks could differ, and therefore one should not use them within the context of the original communicator. The following code of yours works, but only because reorder = 0 in the call to MPI_Cart_create:

dest = nbrs[i];
source = nbrs[i];
MPI_Isend(&outbuf, 1, MPI_INT, dest, tag, 
          MPI_COMM_WORLD, &reqs[i]);
MPI_Irecv(&inbuf[i], 1, MPI_INT, source, tag, 
          MPI_COMM_WORLD, &reqs[i+4]);

Here nbrs are computed within cartcomm and then used within MPI_COMM_WORLD. The correct code should use cartcomm in both communication calls:

MPI_Isend(&outbuf, 1, MPI_INT, dest, tag, 
          cartcomm, &reqs[i]);
MPI_Irecv(&inbuf[i], 1, MPI_INT, source, tag, 
          cartcomm, &reqs[i+4]);
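As a side note (my addition, not part of the original answer): if the MPI_COMM_WORLD ranks of the neighbours were genuinely needed even with reorder = 1, they could be obtained by translating ranks between the two communicators' groups, for example:

MPI_Group cart_group, world_group;
MPI_Comm_group(cartcomm, &cart_group);
MPI_Comm_group(MPI_COMM_WORLD, &world_group);

int world_nbrs[4];
// translate the cartcomm ranks in nbrs[] into MPI_COMM_WORLD ranks;
// MPI_PROC_NULL entries are passed through unchanged
MPI_Group_translate_ranks(cart_group, 4, nbrs, world_group, world_nbrs);

MPI_Group_free(&cart_group);
MPI_Group_free(&world_group);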

Some algorithms require that data travel the other way round, i.e. with forward and backward swapped. For such algorithms the specified displacement displ could be negative. In general, a call to MPI_Cart_shift with a negative displacement is equivalent to a call with a positive displacement but with source and destination swapped.
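In other words (a small illustrative sketch, again assuming cartcomm exists):

int src_fwd, dst_fwd, src_bwd, dst_bwd;
MPI_Cart_shift(cartcomm, 0,  1, &src_fwd, &dst_fwd);
MPI_Cart_shift(cartcomm, 0, -1, &src_bwd, &dst_bwd);
// the second call simply swaps the roles:
// src_bwd == dst_fwd  and  dst_bwd == src_fwd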

answered Dec 07 '22 by Hristo Iliev