I like InfiniBand's promise of a 40 Gbit/s network. My needs do not map onto the MPI model of one master node plus slaves, and if possible I would prefer not to use MPI at all. I need a simple connect/send/receive/close API (or its async versions). Yet neither the MS Azure docs nor the Microsoft HPC Pack docs mention any C/C++ or .NET API that would let my application use InfiniBand as a transport. So my question is simple: how do I use InfiniBand to connect to other nodes, send data packets to them, and receive them on the other end? (Something like a socket API.)
What I am looking for is a connect/send/receive/close tutorial for ND-SPI or DAPL-ND on Azure.
I agree with Hristo's comment that it will be MUCH easier to use the higher-level API that MPI provides than a "native" IB library.
And just to clarify, MPI does not impose a master-slave model. Once all the processes are up and share a communicator, you have all the flexibility in the world: any rank can send data to any other rank. And with MPI 2.0 you have one-sided communication, where one worker can essentially reach into another's memory.
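As a minimal sketch of both points (standard MPI, not Azure-specific; assumes `mpicc` and `mpiexec` are available): every rank exchanges data directly with a neighbour, with no master involved, and then rank 0 uses an MPI-2 one-sided `MPI_Put` to write straight into rank 1's memory window.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point: every rank sends to its right neighbour and
     * receives from its left one -- no master node involved. */
    int out = rank, in = -1;
    int right = (rank + 1) % size, left = (rank + size - 1) % size;
    MPI_Sendrecv(&out, 1, MPI_INT, right, 0,
                 &in,  1, MPI_INT, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d received %d from rank %d\n", rank, in, left);

    /* One-sided (MPI-2): each rank exposes an int through a window,
     * then rank 0 writes directly into rank 1's buffer. */
    int buf = 0;
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);
    if (rank == 0 && size > 1) {
        int value = 42;
        MPI_Put(&value, 1, MPI_INT, /*target rank*/ 1,
                /*target displacement*/ 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);   /* completes the put on both sides */
    if (rank == 1)
        printf("rank 1's buffer now holds %d\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Run it with, e.g., `mpiexec -n 4 ./a.out`. On a cluster whose MPI is built against the InfiniBand stack, both the send/receive pair and the `MPI_Put` travel over IB (the put typically as RDMA), which is exactly the transport you were after, without touching the verbs API yourself.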