Tensor type mismatch when moving to GPU

I'm getting the following error when trying to move my network and tensors to the GPU. I've checked that the network parameters are moved to the GPU, and I check each batch's tensors and move them if they're not already on the GPU. But I'm still getting an error saying that there's a mismatch in the tensor types: one is a torch.cuda.FloatTensor and the other is a torch.FloatTensor. Could someone tell me what I'm doing wrong? Thanks.

My code:

import torch
from torch.autograd import Variable


class Train():
  def __init__(self, network, training, address):
    self.network    = network
    self.address    = address
    self.batch_size = training['batch_size']
    self.iterations = training['iterations']
    self.samples    = training['samples']
    self.data       = training['data']
    self.lr         = training['lr']
    self.noisy_lr   = training['nlr']
    self.cuda       = training['cuda']
    self.save       = training['save']
    self.scale      = training['scale']
    self.limit      = training['limit']
    self.replace    = training['strategy']
    self.optimizer  = torch.optim.Adam(self.network.parameters(), lr=self.lr)

  def tensor_to_Variable(self, t):
    if next(self.network.parameters()).is_cuda and not t.is_cuda:
        t = t.cuda()

    return Variable(t)

  def train(self):
    if self.cuda:
        self.network.cuda()
    dh = DataHandler(self.data)
    loss_fn = torch.nn.MSELoss()
    losses    = []
    validate  = []
    val_size  = 100
    val_diff  = 1
    total_val = float(val_size * self.batch_size)
    hypos     = []
    labels    = []

    # training loop
    for i in range(self.iterations):
        x, y = dh.get_batch(self.batch_size)
        x = self.tensor_to_Variable(x)
        y = self.tensor_to_Variable(y)

        self.optimizer.zero_grad()
        hypo = self.network(x)
        loss = loss_fn(hypo, y)
        loss.backward()
        self.optimizer.step()


import torch.nn as nn
import torch.nn.functional as F


class Feedforward(nn.Module):
    def __init__(self, topology):
        super(Feedforward, self).__init__()
        self.input_dim     = topology['features']
        self.num_hidden    = topology['hidden_layers']
        self.hidden_dim    = topology['hidden_dim']
        self.output_dim    = topology['output_dim']
        self.input_layer   = nn.Linear(self.input_dim, self.hidden_dim)
        self.hidden_layer  = nn.Linear(self.hidden_dim, self.hidden_dim)
        self.output_layer  = nn.Linear(self.hidden_dim, self.output_dim)
        self.dropout_layer = nn.Dropout(p=0.2)

    def forward(self, x):
        batch_size = x.size()[0]
        feat_size  = x.size()[1]
        input_size = batch_size * feat_size

        self.input_layer = nn.Linear(input_size, self.hidden_dim)
        hidden = self.input_layer(x.view(1, input_size)).clamp(min=0)

        for _ in range(self.num_hidden):
            hidden = self.dropout_layer(F.relu(self.hidden_layer(hidden)))

        output_size = batch_size * self.output_dim
        self.output_layer = nn.Linear(self.hidden_dim, output_size)
        return self.output_layer(hidden).view(output_size)

The error:

Traceback (most recent call last):
  File "/media/project/train.py", line 78, in train
    hypo = self.network(x)
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.cuda.FloatTensor, torch.FloatTensor), but expected one of:
 * (torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.cuda.FloatTensor, torch.FloatTensor)
 * (float beta, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.cuda.FloatTensor, torch.FloatTensor)

Stacktrace:

Traceback (most recent call last):
  File "smpl.py", line 90, in <module>
    main()
  File "smpl.py", line 80, in main
    trainer.train()
  File "/media/mpl/temp/train.py", line 82, in train
    hypo = self.network(x)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "model/network.py", line 35, in forward
    hidden = self.input_layer(x.view(1, input_size)).clamp(min=0)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/linear.py", line 54, in forward
    return self._backend.Linear()(input, self.weight, self.bias)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/linear.py", line 10, in forward
    output.addmm_(0, 1, input, weight.t())
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.cuda.FloatTensor, torch.FloatTensor), but expected one of:
 * (torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.cuda.FloatTensor, torch.FloatTensor)
 * (float beta, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.cuda.FloatTensor, torch.FloatTensor)

asked Aug 01 '17 by Soubriquet

2 Answers

This is happening because you are re-initializing self.input_layer in your forward() function.

The call self.network.cuda() moves all of the model parameters onto the GPU, which means every layer you initialize when the Feedforward object is created gets moved to CUDA memory. But when you reinitialize self.input_layer inside your forward() function, that layer's parameters are created on the CPU, not the GPU. The same goes for self.output_layer.
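
As a minimal sketch of the fix (keeping the topology keys from the question, and assuming the input arrives with shape (batch_size, features) rather than being flattened), every layer is defined once in __init__ so that a single network.cuda() call moves all of their parameters, and forward() only applies them:

import torch.nn as nn
import torch.nn.functional as F

class Feedforward(nn.Module):
    def __init__(self, topology):
        super(Feedforward, self).__init__()
        self.num_hidden    = topology['hidden_layers']
        # Layers are created exactly once, here, so .cuda() on the module
        # moves every parameter to the GPU.
        self.input_layer   = nn.Linear(topology['features'], topology['hidden_dim'])
        self.hidden_layer  = nn.Linear(topology['hidden_dim'], topology['hidden_dim'])
        self.output_layer  = nn.Linear(topology['hidden_dim'], topology['output_dim'])
        self.dropout_layer = nn.Dropout(p=0.2)

    def forward(self, x):
        # x: (batch_size, features). No layers are constructed here, so
        # every weight stays on whatever device the module lives on.
        hidden = F.relu(self.input_layer(x))
        for _ in range(self.num_hidden):
            hidden = self.dropout_layer(F.relu(self.hidden_layer(hidden)))
        return self.output_layer(hidden)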

answered by entrophy

Firstly, to compute using your GPU, you have to convert your data to a CUDA tensor type.

In this case, it can be done simply as follows.

dtype = torch.cuda.FloatTensor
x = torch.autograd.Variable(x.type(dtype))

You can apply the same change in your tensor_to_Variable function, as sketched below.
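
For example (a sketch only, assuming self.cuda holds the flag read from the training config in the question):

def tensor_to_Variable(self, t):
    # Convert the batch to a CUDA tensor before wrapping it in a Variable.
    # self.cuda is assumed to be the boolean from training['cuda'].
    if self.cuda:
        t = t.type(torch.cuda.FloatTensor)
    return Variable(t)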

Secondly, to specify that you want your network to expect CUDA tensors, network.cuda() will help.

Lastly, although this is not part of your question, you need not specify the batch size while configuring your feedforward network. To elucidate (a combined usage sketch follows the numbered steps):

1) Forward pass:

def forward(self, x):
    x = self.input_layer(x)
    x = self.middle_layer(x)
    x = self.output_layer(x)
    return x

2) Network initialization

def __init__(self, feature_size, hidden_size, output_size):
    super(Feedforward, self).__init__()  # assuming this is the Feedforward module
    self.input_layer  = nn.Linear(feature_size, hidden_size)
    self.middle_layer = nn.Linear(hidden_size, hidden_size)
    self.output_layer = nn.Linear(hidden_size, output_size)

3) Preprocessing your data before packing it into a CUDA Variable

your_tensor.view(batch_size, feature_size)
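
Putting the three steps together (a sketch only; the sizes and the random batch are hypothetical placeholders for your own data, and Feedforward refers to the variant defined by the snippets above):

import torch
from torch.autograd import Variable

# Hypothetical sizes for illustration; substitute your own dimensions.
batch_size, feature_size, hidden_size, output_size = 4, 10, 32, 1
network = Feedforward(feature_size, hidden_size, output_size)
network.cuda()  # the network now expects CUDA tensors

x_batch = torch.randn(batch_size, feature_size)  # stand-in for a real batch
dtype = torch.cuda.FloatTensor
x = Variable(x_batch.view(batch_size, feature_size).type(dtype))
hypo = network(x)  # every operand lives on the GPU, so no type mismatch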

Hope this helps!

answered by ai-shwarya