 

Is TensorFlow's between-graph replication an example of data parallelism?

I have read the distributed TensorFlow documentation and this answer.

According to these, in the data parallelism approach:

  • The algorithm distributes the data between various cores.
  • Each core independently tries to estimate the same parameter(s).
  • Cores then exchange their estimate(s) with each other to come up with the right estimate for the step.

And in the model parallelism approach:

  • The algorithm sends the same data to all the cores.
  • Each core is responsible for estimating different parameter(s).
  • Cores then exchange their estimate(s) with each other to come up with the right estimate for all the parameters. (A toy sketch contrasting the two approaches follows this list.)
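
To make the contrast concrete, here is a runnable toy sketch in plain NumPy. This is my own illustration, not something from the documentation: the two-core split and the use of a least-squares gradient step as the "estimate" are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 4))   # toy data: 8 samples, 4 features
    y = rng.normal(size=8)
    w = np.zeros(4)               # parameters of a toy linear model
    lr = 0.1

    def grad(Xs, ys, ws):
        # gradient of mean squared error for predictions Xs @ ws
        return 2 * Xs.T @ (Xs @ ws - ys) / len(ys)

    # Data parallelism: each "core" gets a DIFFERENT data shard but estimates
    # the SAME parameters; the per-shard gradients are then averaged.
    shard_grads = [grad(Xs, ys, w)
                   for Xs, ys in zip(np.split(X, 2), np.split(y, 2))]
    w_data = w - lr * np.mean(shard_grads, axis=0)

    # Model parallelism: each "core" sees the SAME data but owns a DIFFERENT
    # slice of the parameters; partial predictions are exchanged to form the
    # residual, then each core updates only its own slice.
    partials = [X[:, :2] @ w[:2], X[:, 2:] @ w[2:]]   # per-core partial outputs
    resid = sum(partials) - y                         # the exchange step
    g0 = 2 * X[:, :2].T @ resid / len(y)              # core 0 owns w[:2]
    g1 = 2 * X[:, 2:].T @ resid / len(y)              # core 1 owns w[2:]
    w_model = w - lr * np.concatenate([g0, g1])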

How do in-graph replication and between-graph replication relate to these approaches?

This article says:

For example, different layers in a network may be trained in parallel on different GPUs. This training procedure is commonly known as "model parallelism" (or "in-graph replication" in the TensorFlow documentation).

And:

In "data parallelism" (or “between-graph replication” in the TensorFlow documentation), you use the same model for every device, but train the model in each device using different training samples.

Is that accurate?

From the TensorFlow DevSummit video linked on the TensorFlow documentation page: [screenshot from the video showing the input data being split across workers] It looks like the data is split and distributed to each worker. So isn't in-graph replication following the data parallelism approach?

Asked by Amila on Jun 20 '18.


1 Answer

In-graph replication and between-graph replication are not directly related to data parallelism and model parallelism. Data parallelism and model parallelism are terms that divide parallelization algorithms into two categories, as described in the Quora answer you linked. In-graph replication and between-graph replication, on the other hand, are two ways to implement parallelism in TensorFlow. Data parallelism, for instance, can be implemented with either in-graph replication or between-graph replication.

As shown in the video, in-graph replication is achieved by assigning different parts of a single graph to different devices. Between-graph replication instead has multiple copies of the graph running in parallel, which is achieved with distributed TensorFlow.
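
For example, here is a minimal sketch of data parallelism implemented with in-graph replication, assuming TF 1.x and two GPUs (the shapes and scope name are illustrative): one client builds a single graph and splits the batch across per-device towers that share the same variables.

    import tensorflow as tf  # assumes TF 1.x

    # One client builds ONE graph; the input batch is split, each half is
    # pinned to a different device, and the towers share the same variables.
    x = tf.placeholder(tf.float32, shape=[None, 10])
    halves = tf.split(x, num_or_size_splits=2, axis=0)

    outputs = []
    for i, part in enumerate(halves):
        with tf.device("/gpu:%d" % i):
            with tf.variable_scope("model", reuse=tf.AUTO_REUSE):
                w = tf.get_variable("w", shape=[10, 1])  # same "w" in both towers
                outputs.append(tf.matmul(part, w))

    y = tf.concat(outputs, axis=0)  # still a single graph spanning both GPUs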

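And here is a minimal sketch of the same data parallelism implemented with between-graph replication, again assuming TF 1.x (the host:port addresses, flags, and toy model are illustrative): every worker process runs this same script and builds its own copy of the graph, while a parameter server hosts the shared variables.

    import tensorflow as tf  # assumes TF 1.x

    flags = tf.app.flags
    flags.DEFINE_string("job_name", "worker", "either 'ps' or 'worker'")
    flags.DEFINE_integer("task_index", 0, "index of the task within its job")
    FLAGS = flags.FLAGS

    # Every process runs this same script; the addresses are made up.
    cluster = tf.train.ClusterSpec({
        "ps": ["localhost:2222"],
        "worker": ["localhost:2223", "localhost:2224"],
    })
    server = tf.train.Server(cluster, job_name=FLAGS.job_name,
                             task_index=FLAGS.task_index)

    if FLAGS.job_name == "ps":
        server.join()  # the parameter server only hosts the shared variables
    else:
        # Each worker builds its OWN copy of the graph (between-graph
        # replication); replica_device_setter places variables on the ps job.
        with tf.device(tf.train.replica_device_setter(
                worker_device="/job:worker/task:%d" % FLAGS.task_index,
                cluster=cluster)):
            x = tf.random_normal([32, 10])  # stand-in for this worker's shard
            w = tf.get_variable("w", shape=[10, 1])
            loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
            train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

        with tf.train.MonitoredTrainingSession(
                master=server.target,
                is_chief=(FLAGS.task_index == 0)) as sess:
            for _ in range(100):
                sess.run(train_op)  # each worker trains on its own data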
Answered by BlueSun on Nov 02 '22.