 

TensorFlow Horovod: NCCL and MPI

Horovod combines NCCL and MPI into a wrapper for distributed deep learning in frameworks such as TensorFlow. I hadn't heard of NCCL before and was looking into its functionality. NVIDIA states the following about NCCL on its website:

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.

From the introduction video about NCCL, I understood that NCCL works over PCIe, NVLink, native InfiniBand, and Ethernet, and that it can even detect whether GPUDirect RDMA makes sense in the current hardware topology and use it transparently.

So I am wondering why MPI is needed in Horovod. As far as I understand, MPI is also used to efficiently exchange gradients among distributed nodes via an allreduce paradigm, but NCCL already supports that functionality.

So is MPI only used for easily scheduling jobs on a cluster? Or for distributed deep learning on CPUs, since we cannot use NCCL there?

I would appreciate it if someone could explain in which scenarios MPI and/or NCCL are used for distributed deep learning, and what their responsibilities are during a training job.

asked Nov 27 '18 by Alex




2 Answers

MPI (Message Passing Interface) is a message-passing standard used in parallel computing (Wikipedia). Most of the time, when using Horovod, you'd use Open MPI, which is an open-source implementation of the MPI standard.

An MPI implementation makes it easy to run more than one instance of the same program in parallel: the program code stays the same, but it runs in several different processes. In addition, the MPI library exposes an API for sharing data and information among these processes.
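As a minimal illustration (using mpi4py, a Python binding for MPI; Horovod itself talks to the MPI library directly rather than through mpi4py), every process runs the same script, learns its own rank, and can take part in collective operations such as allreduce:

    # hello_mpi.py -- run with: mpirun -np 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # id of this process
    size = comm.Get_size()  # total number of processes

    # Every process contributes its rank; allreduce sums the values and
    # delivers the result to all processes.
    total = comm.allreduce(rank, op=MPI.SUM)
    print(f"process {rank} of {size}: sum of ranks = {total}")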

Horovod uses this mechanism to run several processes of the Python script that runs the neural network. These processes need to know and share certain information during the run. Some of this information is about the environment, for example (see the sketch after this list):

  1. The number of processes currently running, so that parameters and hyperparameters of the neural network, such as the batch size and learning rate, can be adjusted accordingly.
  2. Which process is the "master", so that logs are printed and files (checkpoints) are saved from only a single process.
  3. The id (called "rank") of the current process, so that each process can work on its own slice of the input data.
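As a hedged sketch, this is how the environment information above is typically used in a Horovod TensorFlow script (the dataset, learning rate, and log message are illustrative, not from the original post):

    import tensorflow as tf
    import horovod.tensorflow as hvd

    hvd.init()  # rank, size, local_rank come from the MPI environment

    # (3) Each process reads only its own shard of the input data.
    dataset = tf.data.Dataset.range(1000).shard(hvd.size(), hvd.rank())

    # (1) A common heuristic: scale the learning rate by the number of
    # processes, since the effective batch size grows with hvd.size().
    learning_rate = 0.001 * hvd.size()

    # (2) Only the "master" process (rank 0) prints logs and saves
    # checkpoints, so they are not written hvd.size() times.
    if hvd.rank() == 0:
        print(f"training with {hvd.size()} processes, lr={learning_rate}")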

Some of this information is about the training process of the neural network, for example:

  4. The randomized initial values of the model's weights and biases, so that all processes start from the same point.
  5. The values of the weights and biases at the end of every training step, so that all processes start the next step with the same values.

More information than this is shared; the bullets above are just some examples.
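As a minimal sketch, here is how (4) and (5) surface in Horovod's TF2 eager API (the toy one-variable model is illustrative; for TF1-style optimizers Horovod provides the equivalent hvd.DistributedOptimizer wrapper):

    import tensorflow as tf
    import horovod.tensorflow as hvd

    hvd.init()

    # A toy "model": a single trainable scalar, driven toward zero.
    w = tf.Variable(1.0)
    opt = tf.optimizers.SGD(0.01 * hvd.size())

    # (4) Broadcast the initial value from rank 0 so that every process
    # starts from the same point.
    hvd.broadcast_variables([w], root_rank=0)

    # (5) One training step: DistributedGradientTape averages the
    # gradients across all processes (an allreduce) before the update.
    with tf.GradientTape() as tape:
        loss = tf.square(w)
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, [w])
    opt.apply_gradients(zip(grads, [w]))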

At first, Horovod used MPI for all of the requirements above. Then NVIDIA released NCCL, a library of algorithms for high-performance communication between GPUs. To improve overall performance, Horovod started using NCCL for things like (4) and, mainly, (5), since NCCL allows this data to be shared between GPUs much more efficiently.

The NVIDIA docs state that NCCL can be used in conjunction with MPI, and in general:

MPI is used for CPU-CPU communication, and NCCL is used for GPU-GPU communication.

Horovod still uses MPI to launch the multiple instances of the Python script and to manage the environment (rank, size, which process is the "master", etc.), allowing the user to easily manage the run.
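For example, a run on four local GPUs, or on two machines, might be launched like this (server1 and server2 are placeholder host names; -np sets the number of processes and -H the host list):

    horovodrun -np 4 python train.py
    horovodrun -np 8 -H server1:4,server2:4 python train.py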

answered Oct 02 '22 by Raz Rotenberg


Horovod used only MPI in the beginning.

After NCCL was introduced to Horovod, MPI is still used, even in NCCL mode, to provide environment information (rank, size, and local_rank). The NCCL docs include an example that shows how NCCL leverages MPI in a one-device-per-process setting:

The following code is an example of a communicator creation in the context of MPI, using one device per MPI rank.

https://docs.nvidia.com/deeplearning/sdk/nccl-developer-guide/docs/examples.html#example-2-one-device-per-process-or-thread
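The pattern in that example is: rank 0 creates a unique NCCL id, MPI broadcasts it to all ranks, and each rank then joins the NCCL communicator. A rough Python analogue using mpi4py and CuPy's NCCL bindings (the NVIDIA example itself is in C; this sketch assumes one GPU per rank):

    from mpi4py import MPI
    import cupy
    from cupy.cuda import nccl

    mpi_comm = MPI.COMM_WORLD
    rank = mpi_comm.Get_rank()
    size = mpi_comm.Get_size()

    cupy.cuda.Device(rank).use()  # one GPU per MPI rank

    # Rank 0 creates the NCCL unique id; MPI broadcasts it to everyone.
    uid = nccl.get_unique_id() if rank == 0 else None
    uid = mpi_comm.bcast(uid, root=0)

    # Every rank joins the same NCCL communicator.
    comm = nccl.NcclCommunicator(size, uid, rank)

    # From here on, data is exchanged over NCCL (GPU-GPU), not MPI;
    # e.g. an in-place allreduce of a small buffer:
    x = cupy.ones(4, dtype=cupy.float32)
    comm.allReduce(x.data.ptr, x.data.ptr, x.size,
                   nccl.NCCL_FLOAT, nccl.NCCL_SUM,
                   cupy.cuda.Stream.null.ptr)
    cupy.cuda.Stream.null.synchronize()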

answered Oct 02 '22 by eval