 

Running LSTM with multiple GPUs gets "Input and hidden tensors are not at the same device"

I am trying to train an LSTM layer in PyTorch using 4 GPUs. When initializing, I called .cuda() to move the hidden state to the GPU. But when I run the code with multiple GPUs, I get this runtime error:

RuntimeError: Input and hidden tensors are not at the same device

I tried to solve the problem by calling .cuda() in the forward function, like below:

self.hidden = (self.hidden[0].type(torch.FloatTensor).cuda(), self.hidden[1].type(torch.FloatTensor).cuda()) 

This line seems to solve the problem, but it raises a concern: is the updated hidden state visible to the other GPUs? Should I move the tensors back to the CPU at the end of the forward function for each batch, or is there another way to solve the problem?

asked Feb 04 '19 by ida


1 Answer

When you call .cuda() on a tensor, PyTorch moves it to the current default GPU device (GPU-0). Under data parallelism, the input batch is scattered across all of the GPUs, so a replica's slice of the data can live on one GPU while the hidden state you created with .cuda() stays on GPU-0; that device mismatch produces the runtime error you are facing.
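As a minimal sketch of one way around that mismatch (the module name, sizes, and layer count below are illustrative, not taken from the question or answer), the hidden state can be allocated on the same device as the incoming batch inside forward(), so each replica builds it on its own GPU:

import torch
import torch.nn as nn

class SketchLSTM(nn.Module):
    def __init__(self, input_size=8, hidden_size=16, num_layers=1):
        super().__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True)

    def forward(self, x):
        # x is [B x T x input_size]; new_zeros creates (h0, c0) on x's device,
        # so the hidden state lands on whichever GPU this replica received
        h0 = x.new_zeros(self.num_layers, x.size(0), self.hidden_size)
        c0 = x.new_zeros(self.num_layers, x.size(0), self.hidden_size)
        output, _ = self.lstm(x, (h0, c0))
        return output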

The correct way to implement data parallelism for recurrent neural networks is as follows:

import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class MyModule(nn.Module):
    # ... __init__, other methods, etc.

    # padded_input is of shape [B x T x *] (batch_first mode) and contains
    # the sequences sorted by lengths
    #   B is the batch size
    #   T is max sequence length
    def forward(self, padded_input, input_lengths):
        total_length = padded_input.size(1)  # get the max sequence length
        packed_input = pack_padded_sequence(padded_input, input_lengths,
                                            batch_first=True)
        packed_output, _ = self.my_lstm(packed_input)
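        # total_length forces each replica to pad its output back to the full T,
        # so DataParallel can gather the per-GPU outputs along the batch dimension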
        output, _ = pad_packed_sequence(packed_output, batch_first=True,
                                        total_length=total_length)
        return output

m = MyModule().cuda()
dp_m = nn.DataParallel(m)

You also need to set the CUDA_VISIBLE_DEVICES environment variable so that every GPU you want to use in the multi-GPU setup is visible to the process.
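For example (a sketch; the device indices are an assumption, pick the GPUs you actually want to use):

import os

# CUDA_VISIBLE_DEVICES is read when CUDA is initialized, so set it before the
# first CUDA call in the script; indices 0-3 here are purely illustrative
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

Alternatively, it can be exported in the shell before launching the script.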

References:

  • Data Parallelism
  • Fast.ai Forums
  • RNNs and Data Parallelism
answered Sep 21 '22 by sayan