 

Is it possible to split a network across multiple GPUs in TensorFlow?

I plan to run a very large recurrent network (e.g. 2048x5). Is it possible to define each layer on a different GPU in TensorFlow? How should I implement the model to achieve the best efficiency? I understand there is overhead for inter-GPU or GPU-CPU-GPU communication.

asked Mar 30 '16 by read

1 Answer

Splitting a large model across multiple GPUs is certainly possible in TensorFlow, but doing it optimally is a hard research problem. In general, you will need to do the following:

  1. Wrap large contiguous regions of your code in a with tf.device(...): block, naming the different GPUs:

    with tf.device("/gpu:0"):
      # Define first layer.
    
    with tf.device("/gpu:1"):
      # Define second layer.
    
    # Define other layers, etc.
    
  2. When building your optimizer, pass the optional argument colocate_gradients_with_ops=True to the optimizer.minimize() method:

    loss = ...
    optimizer = tf.train.AdagradOptimizer(0.01)
    train_op = optimizer.minimize(loss, colocate_gradients_with_ops=True)
    
  3. (Optional.) You may need to enable "soft placement" in the tf.ConfigProto that you pass when creating your tf.Session, if any of the operations in your model cannot run on a GPU:

    config = tf.ConfigProto(allow_soft_placement=True)
    sess = tf.Session(config=config)
    
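Putting the three steps together, here is a minimal sketch of how the pieces fit. It uses two hypothetical fully connected layers with made-up sizes rather than the 2048x5 recurrent layers from the question, but the device-placement pattern is the same:

    import tensorflow as tf

    # Hypothetical sizes for illustration only.
    batch_size, input_size, hidden_size = 32, 256, 512

    inputs = tf.placeholder(tf.float32, [batch_size, input_size])
    labels = tf.placeholder(tf.float32, [batch_size, hidden_size])

    # Step 1: pin each layer's variables and ops to a different GPU.
    with tf.device("/gpu:0"):
        w1 = tf.get_variable("w1", [input_size, hidden_size])
        layer1 = tf.nn.relu(tf.matmul(inputs, w1))

    with tf.device("/gpu:1"):
        w2 = tf.get_variable("w2", [hidden_size, hidden_size])
        layer2 = tf.nn.relu(tf.matmul(layer1, w2))

    loss = tf.reduce_mean(tf.square(layer2 - labels))

    # Step 2: keep each gradient op on the same device as its forward op.
    optimizer = tf.train.AdagradOptimizer(0.01)
    train_op = optimizer.minimize(loss, colocate_gradients_with_ops=True)

    # Step 3: fall back to CPU for ops without a GPU kernel, and log
    # where each op was actually placed.
    config = tf.ConfigProto(allow_soft_placement=True,
                            log_device_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())

Only the tensors that cross a tf.device boundary (layer1 and its gradient in this sketch) have to be copied between GPUs, which is where the inter-GPU overhead mentioned in the question comes from.
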
answered Sep 30 '22 by mrry