I am using GANEstimator with MirroredStrategy to train on multiple GPUs of a single instance. In my case input_fn
returns a tf.data.Dataset
with the following settings:
dataset = dataset.repeat()
dataset = dataset.shuffle(buffer_size=100)
dataset = dataset.batch(self.batch_size, drop_remainder=True)
dataset = dataset.prefetch(100)
The reason I am asking is: do I need to call something like dataset.shard()
manually so that different data is passed to the workers (see the sketch after the quoted descriptions below)? I have been digging through the code of Estimator and MirroredStrategy, but it is unclear to me what is going on. Additional confusion comes from the descriptions of the distribution strategies:
MirroredStrategy: This does in-graph replication with synchronous
training on many GPUs on one machine. Essentially, we create copies of all
variables in the model's layers on each device. We then use all-reduce
to combine gradients across the devices before applying them
to the variables to keep them in sync.
CollectiveAllReduceStrategy: This is a version of MirroredStrategy
for multi-worker training.
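Just to be explicit about what I mean by sharding manually, this is a sketch of what I imagine a multi-worker setup would need; num_workers and worker_index are hypothetical values that would come from the cluster configuration, and it is not something I currently do:

import tensorflow as tf

def create_sharded_dataset(num_workers, worker_index, batch_size):
    # Hypothetical: num_workers / worker_index would come from the cluster spec.
    dataset = tf.data.Dataset.range(1000)               # stand-in for my real data
    dataset = dataset.shard(num_workers, worker_index)  # each worker sees a disjoint slice
    dataset = dataset.repeat()
    dataset = dataset.shuffle(buffer_size=100)
    dataset = dataset.batch(batch_size, drop_remainder=True)
    dataset = dataset.prefetch(100)
    return dataset

With a single machine and MirroredStrategy I would hope this is unnecessary, which is exactly what I am trying to confirm.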
So does MirroredStrategy use only one worker? I don't understand it. I have to set the batch size to what fits on a single tower, otherwise I get an OOM (see also the note after the code below). Can someone please point me to the code and explain how such a simple setup works with batches:
def create_dataset():
    ...
    dataset = dataset.repeat()
    dataset = dataset.shuffle(buffer_size=100)
    dataset = dataset.batch(self.batch_size, drop_remainder=True)
    dataset = dataset.prefetch(100)
    return dataset
NUM_GPUS = 4
strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=NUM_GPUS)
optimizer = tf.train.RMSPropOptimizer(learning_rate=0.01, use_locking=True)
optimizer_d = tf.train.RMSPropOptimizer(learning_rate=0.01, use_locking=True)
config = tf.estimator.RunConfig(save_checkpoints_steps=100,
                                save_summary_steps=1, keep_checkpoint_max=50,
                                train_distribute=strategy)
# I have more hooks here, just simplified to show
def get_hooks_fn(GANTrainOps):
    disjoint_train_hook_func = tfgan.get_sequential_train_hooks(
        train_steps=tfgan.GANTrainSteps(10, 1)
    )  # g steps, d steps
    disjoint_train_hooks = disjoint_train_hook_func(GANTrainOps)
    return [update_hook, summary_hook] + disjoint_train_hooks
# Create GAN estimator.
gan_estimator = tfgan.estimator.GANEstimator(
    model_dir='/data/checkpoints/estimator_model',
    generator_fn=generator_fn,
    discriminator_fn=discriminator_fn,
    generator_loss_fn=generator_loss_fn,
    discriminator_loss_fn=discriminator_loss_fn,
    generator_optimizer=optimizer,
    discriminator_optimizer=optimizer_d,
    use_loss_summaries=True,
    config=config,
    get_hooks_fn=get_hooks_fn)
gan_estimator.train(input_fn=create_dataset, steps=10000)
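A side note on the batch sizes I observe (my own reading of the OOM behaviour, so the concrete numbers below are only an illustrative assumption): batch_size above appears to act as the per-GPU batch size, so the effective number of examples consumed per training step would be:

# Illustrative arithmetic only; 128 is a made-up per-tower capacity.
per_gpu_batch_size = 128                                 # largest batch that fits on one tower without OOM
examples_per_train_step = per_gpu_batch_size * NUM_GPUS  # 128 * 4 = 512 examples per step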
Thanks!
The code of MirroredStrategy contains:
1) Weird wording:
The multi-worker version of this class maps one replica to one device on a worker. It mirrors all model variables on all replicas. For example, if you have two workers and each worker has 4 GPUs, it will create 8 copies of the model variables on these 8 GPUs. Then like in MirroredStrategy(???), each replica performs their computation with their own copy of variables unless in cross-replica model where variable or tensor reduction happens.
2)
auto_shard_dataset: whether to auto-shard the dataset when there are multiple workers.
This parameter is False by default.
EDIT:
So far I found that tf.estimator.train() eventually reaches what seems to be strategy.make_input_fn_iterator():
def _get_iterator_from_input_fn(self, input_fn, mode, distribution=None):
    if distribution is not None:
        iterator = distribution.make_input_fn_iterator(
            lambda _: self._call_input_fn(input_fn, mode))
        input_hooks = [
            estimator_util.DistributedIteratorInitializerHook(iterator)]
    else:
        result = self._call_input_fn(input_fn, mode)
        iterator = result.make_initializable_iterator()
        input_hooks = [estimator_util._DatasetInitializerHook(iterator)]
    return iterator, input_hooks
But make_input_fn_iterator() was removed from the code of MirroredStrategy and is no longer there! I don't understand how it works and where the dataset is actually split.
EDIT2: I can't find make_input_fn_iterator anywhere in my TensorFlow 1.12.0 distribution with grep; it seems to be completely absent from that release.
OK, after spending some time investigating GitHub, I found that the code there is already different from my tf 1.12.0. So, going down into the local files of 1.12.0 gave me:
GANEstimator inherits from tf.python.estimator.Estimator, and Estimator.__init__() contains:
# The distribute field contains an instance of DistributionStrategy.
self._train_distribution = self._config.train_distribute
Then the path down is:
tf.contrib.gan.GANEstimator -> tf.python.estimator.Estimator.train() ->
tf.python.estimator.Estimator._train_model(input_fn, hooks, saving_listeners) ->
._train_model_distributed(input_fn, hooks, saving_listeners) ->
._get_iterator_from_input_fn(input_fn, model_fn_lib.ModeKeys.TRAIN, self._train_distribution) ->
distribution.distribute_dataset(lambda: self._call_input_fn(input_fn, mode))
which, in my case, calls MirroredStrategy.distribute_dataset():
def distribute_dataset(self, dataset_fn):
    if self._cluster_spec:
        return values.MultiWorkerDataset(
            partial(self._call_dataset_fn, dataset_fn), self._worker_device_map,
            self._prefetch_on_device, self._auto_shard_dataset)
    else:
        return values.PerDeviceDataset(
            self._call_dataset_fn(dataset_fn), self._devices,
            self._prefetch_on_device)
tensorflow/python/training/distribute.py:
def _call_dataset_fn(self, dataset_fn):
    result = dataset_fn()
    if not isinstance(result, dataset_ops.Dataset):
        raise ValueError(
            "dataset_fn() must return a tf.data.Dataset when using a "
            "DistributionStrategy.")
    return result
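(This check is presumably why, with train_distribute set, the input_fn has to return the tf.data.Dataset object itself rather than features/labels tensors. My create_dataset above already does that; a minimal sketch of the required shape of an input_fn, just to make the point explicit:)

import tensorflow as tf

def input_fn():
    # Must return the tf.data.Dataset itself, not iterator.get_next() tensors,
    # otherwise _call_dataset_fn raises the ValueError shown above.
    dataset = tf.data.Dataset.range(1000)            # stand-in for the real data
    dataset = dataset.batch(32, drop_remainder=True)
    return dataset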
I assume PerDeviceDataset is used (there is no cluster_spec in my single-machine case), so finally I find these two classes in values.py:
class PerDeviceDataset(object):
  """Like `tf.data.Dataset` split devices, producing `PerDevice` data."""

  def __init__(self, dataset, devices, prefetch_on_device=None):
    self._devices = devices

    # Default to using prefetching in graph mode, unless specified.
    # TODO(priyag): Enable prefetching in eager mode.
    self._prefetch_on_device = prefetch_on_device
    if self._prefetch_on_device is None:
      self._prefetch_on_device = not context.executing_eagerly()
    assert not (self._prefetch_on_device and context.executing_eagerly()), (
        "Prefetching is only supported in graph mode currently")

    if self._prefetch_on_device:
      self._dataset = dataset.apply(
          prefetching_ops_v2.prefetch_to_devices(self._devices))
    else:
      # TODO(priyag): If dropping remainder is not appropriate, find another
      # approach to distributing the dataset when not possible to divide evenly.
      # Possibly not an issue when we start using PartitionedDataset.
      self._dataset = dataset.batch(len(devices), drop_remainder=True)

  def make_one_shot_iterator(self):
    """Get a one time use iterator for the distributed PerDeviceDataset."""
    dataset_iterator = self._dataset.make_one_shot_iterator()
    return PerDeviceDataIterator(dataset_iterator, self._devices,
                                 self._prefetch_on_device)

  def make_initializable_iterator(self):
    """Get an initializable iterator for the distributed PerDeviceDataset."""
    dataset_iterator = self._dataset.make_initializable_iterator()
    return PerDeviceDataIterator(dataset_iterator, self._devices,
                                 self._prefetch_on_device)
class PerDeviceDataIterator(object):
  """An iterator (like `tf.data.Iterator`) into a `PerDeviceDataset`."""

  def __init__(self, iterator, devices, prefetch_on_device=None):
    self._iterator = iterator
    self._devices = devices
    self._prefetch_on_device = prefetch_on_device

  @property
  def initializer(self):
    return self._iterator.initializer

  def get_next(self, name=None):
    """Scatter the input across devices."""
    if self._prefetch_on_device:
      data_list = self._iterator.get_next(name=name)
      index = dict(zip(self._devices, data_list))
    else:
      batch = self._iterator.get_next(name=name)
      index = {}
      def get_ith(i):
        return lambda x: x[i]

      for i, d in enumerate(self._devices):
        index[d] = nest.map_structure(get_ith(i), batch)
        if context.executing_eagerly():
          with ops.device(d):
            index[d] = nest.map_structure(array_ops.identity, index[d])

    return regroup(index)
So, as far as I understand, my dataset_fn() is simply called once to obtain the dataset object, and then an outer batch of size equal to the number of GPUs is applied on top of it. The elements of this outer batch, which are the actual batches defined inside dataset_fn(), are then assigned to the different devices.
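To convince myself, here is a small standalone sketch of that "batch of batches" behaviour using plain tf.data. This is my own illustration of what PerDeviceDataset seems to do in the non-prefetch branch, not code from TensorFlow:

import tensorflow as tf

BATCH_SIZE = 8   # per-GPU batch, as in create_dataset()
NUM_GPUS = 4

dataset = tf.data.Dataset.range(1000)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)  # what my dataset_fn() returns
dataset = dataset.batch(NUM_GPUS, drop_remainder=True)    # what PerDeviceDataset adds on top

iterator = dataset.make_one_shot_iterator()
outer_batch = iterator.get_next()   # shape (NUM_GPUS, BATCH_SIZE)

with tf.Session() as sess:
    value = sess.run(outer_batch)
    print(value.shape)  # (4, 8): row i would go to device i, like get_ith(i) above

If this is right, it would also explain the OOM behaviour: each GPU receives one full batch of BATCH_SIZE, so the batch size in dataset_fn() is per replica, not global.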