Asynchronous MPI with SysV shared memory

We have a large Fortran/MPI code base which makes use of System V shared memory segments within a node. We run on fat nodes with 32 processors but only 2 or 4 NICs, and relatively little memory per CPU; so the idea is that we set up a shared memory segment, on which each CPU performs its part of the calculation (in its block of the SMP array). MPI is then used to handle inter-node communications, but only on the master in the SMP group. The procedure is double-buffered and has worked nicely for us.

The problem came when we decided to switch to asynchronous comms, for a bit of latency hiding. Since only a couple of CPUs on the node communicate over MPI, but all of the CPUs see the received array (via shared memory), a CPU doesn't know when the communicating CPU has finished, unless we enact some kind of barrier, and then why do asynchronous comms?

The ideal, hypothetical solution would be to put the request handles in an SMP segment and run mpi_request_get_status on the CPU which needs to know. Of course, a request handle is only valid on the communicating CPU, so that doesn't work! Another proposed possibility was to branch a thread off on the communicating process and use it to run mpi_request_get_status in a loop, with the flag argument in a shared memory segment so all the other images can see it. Unfortunately, that's not an option either, since we are constrained not to use threading libraries.

The only viable option we've come up with seems to work, but feels like a dirty hack. We put an impossible value in the upper-bound address of the receive buffer; that way, once the mpi_irecv has completed, the value has changed, and hence every CPU knows when it can safely use the buffer. Is that OK? It seems it would only work reliably if the MPI implementation can be guaranteed to transfer data consecutively. That almost sounds convincing, since we've written this thing in Fortran and so our arrays are contiguous; I would imagine that the accesses would be too.

Any thoughts?

Thanks, Joly

Here's a pseudo-code template of the kind of thing I'm doing. Haven't got the code as a reference at home, so I hope I haven't forgotten anything crucial, but I'll make sure when I'm back in the office...

pseudo(array_arg1(:,:), array_arg2(:,:)...)

  integer,      parameter : num_buffers=2
  Complex64bit, smp       : buffer(:,:,num_buffers)
  integer                 : prev_node, next_node
  integer                 : send_tag(num_buffers), recv_tag(num_buffers)
  integer                 : current, next
  integer                 : num_nodes

  boolean                 : do_comms
  boolean,      smp       : safe(num_buffers)
  boolean,      smp       : calc_complete(num_cores_on_node,num_buffers)

  allocate_arrays(...)

  work_out_neighbours(prev_node,next_node)

  am_i_a_slave(do_comms)

  setup_ipc(buffer,...)

  setup_ipc(safe,...)

  setup_ipc(calc_complete,...)

  current = 1
  next = mod(current,num_buffers)+1

  safe=true

  calc_complete=false

  work_out_num_nodes_in_ring(num_nodes)

  do i=1,num_nodes

    if(do_comms)
      check_all_tags_and_set_safe_flags(send_tag, recv_tag, safe) # just in case anything else has finished.
      check_tags_and_wait_if_need_be(current, send_tag, recv_tag)
      safe(current)=true
    else
      wait_until_true(safe(current))
    end if

    calc_complete(my_rank,current)=false
    calc_complete(my_rank,current)=calculate_stuff(array_arg1,array_arg2..., buffer(current), bounds_on_process)
    if(not calc_complete(my_rank,current)) error("fail!")

    if(do_comms)
      check_all_tags_and_set_safe_flags(send_tag, recv_tag, safe)

      check_tags_and_wait_if_need_be(next, send_tag, recv_tag)
      recv(prev_node, buffer(next), recv_tag(next))
      safe(next)=false

      wait_until_true(all(calc_complete(:,current)))
      check_tags_and_wait_if_need_be(current, send_tag, recv_tag)
      send(next_node, buffer(current), send_tag(current))
      safe(current)=false
    end if

    work_out_new_bounds()

    current=next
    next=mod(next,num_buffers)+1

  end do
end pseudo

So ideally, I would have liked to run "check_all_tags_and_set_safe_flags" in a loop in another thread on the communicating process, or better still: do away with the "safe" flags altogether and make the handles to the sends / receives available on the slaves; then I could run "check_tags_and_wait_if_need_be(current, send_tag, recv_tag)" (mpi_wait) before the calculation on the slaves instead of "wait_until_true(safe(current))".

asked May 16 '12 by holyjoly
1 Answer

"...unless we enact some kind of barrier, and then why do asynchronous comms?"

That sentence is a bit confused. The purpose of asynchronous communications is to overlap communication and computation, so that you can hopefully get some real work done while the communication is going on. But this means you now have two tasks occurring which eventually have to be synchronized, so there has to be something which blocks the tasks at the end of the first communication phase before they move on to the second computation phase (or whatever).
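To make the overlap pattern concrete, here is a hedged C sketch of the standard double-buffered idiom being discussed (all names are illustrative and `compute_on` is a stand-in for the real work; this is not the asker's code, and it needs an MPI installation plus mpirun to actually run):

```c
#include <mpi.h>

static void compute_on(double *buf, int n) {   /* stand-in for real work */
    for (int i = 0; i < n; i++) buf[i] *= 2.0;
}

/* Shift data around a ring of ranks while computing: post the receive
 * into the buffer we are NOT using, work on the current one, then wait
 * on both transfers before swapping buffers. */
void ring_steps(double *bufs[2], int n, int prev, int next,
                MPI_Comm comm, int steps) {
    int cur = 0;
    MPI_Request rreq, sreq;
    for (int i = 0; i < steps; i++) {
        int nxt = 1 - cur;
        /* 1. asynchronous receive into the idle buffer */
        MPI_Irecv(bufs[nxt], n, MPI_DOUBLE, prev, 0, comm, &rreq);
        /* 2. compute on the current buffer while the transfer proceeds */
        compute_on(bufs[cur], n);
        /* 3. pass the current buffer onward */
        MPI_Isend(bufs[cur], n, MPI_DOUBLE, next, 0, comm, &sreq);
        /* 4. the synchronization point: nobody touches either buffer
              again until both transfers have completed */
        MPI_Wait(&rreq, MPI_STATUS_IGNORE);
        MPI_Wait(&sreq, MPI_STATUS_IGNORE);
        cur = nxt;
    }
}
```

The `MPI_Wait` calls in step 4 are exactly the "something which blocks" described above; the latency hiding comes entirely from the work done between the `MPI_Irecv` and the `MPI_Wait`.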

The question of what to do in this case to implement things nicely (it seems like what you've got now works, but you're rightly concerned about the fragility of the result) depends on how you're doing the implementation. You use the word threads, but (a) you're using SysV shared memory segments, which you wouldn't need if you had threads, and (b) you're constrained not to use threading libraries, so presumably you actually mean you're fork()ing processes after MPI_Init() or something?

I agree with Hristo that your best bet is almost certainly to use OpenMP for on-node distribution of computation, which would probably greatly simplify your code. It would help to know more about your constraint not to use threading libraries.

Another approach, which would still avoid having to "roll your own" process-based communication layer in addition to MPI, would be to make all the processes on the node MPI processes, but create a few communicators: one for the global communications, and one "local" communicator per node. Only a couple of processes per node would be part of the communicator which actually does off-node communications, and the others would work on the shared memory segment. Then you could use MPI-based methods (Wait, or Barrier) for the on-node synchronization. The upcoming MPI-3 will actually have some explicit support for using local shared memory segments this way.
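One way that communicator split could look, as a hedged sketch (the names `node_color` and `make_node_comms` are illustrative; how the node color is computed, e.g. from the hostname, is up to the application):

```c
#include <mpi.h>

/* Split MPI_COMM_WORLD into one communicator per node, plus a
 * "masters" communicator containing only each node's local rank 0. */
void make_node_comms(int node_color,
                     MPI_Comm *node_comm, MPI_Comm *masters_comm) {
    int world_rank, node_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* all ranks with the same color land in the same on-node communicator */
    MPI_Comm_split(MPI_COMM_WORLD, node_color, world_rank, node_comm);
    MPI_Comm_rank(*node_comm, &node_rank);

    /* only the on-node masters join the off-node communicator;
       everyone else gets MPI_COMM_NULL back */
    MPI_Comm_split(MPI_COMM_WORLD,
                   node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, masters_comm);
}
```

(MPI-3 later standardized exactly this grouping: `MPI_Comm_split_type` with `MPI_COMM_TYPE_SHARED` splits the world into communicators whose ranks can share memory, removing the need to compute a node color by hand.)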

Finally, if you're absolutely bound and determined to keep doing things through what's essentially your own local-node-only IPC implementation: since you're already using SysV shared memory segments, you might as well use SysV semaphores for the synchronization. You're already using your own (somewhat delicate) semaphore-like mechanism to "flag" when the data is ready for computation; here you could use a more robust, already-written semaphore to let the non-MPI processes know when the data is ready (and a similar mechanism to let the MPI process know when the others are done with the computation).

answered Sep 23 '22 by Jonathan Dursi