
Why does PyTorch expect a DoubleTensor instead of a FloatTensor?

From everything I see online, FloatTensors are PyTorch's default for everything. When I create a tensor to pass to my generator module it is a FloatTensor, but when I try to run it through a linear layer it complains that it wants a DoubleTensor.

import numpy as np
import torch
from torch import nn

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.fully_connected = nn.Linear(100, 1024*4*4, bias=False)

    def forward(self, zvec):
        print(zvec.size())
        fc = self.fully_connected(zvec)
        return fc.size()

gen = Generator()

gen(torch.from_numpy(np.random.normal(size=100)))

Which produces

RuntimeError: Expected object of type torch.DoubleTensor but found type torch.FloatTensor for argument #2 'mat2'
asked by jntrcs

1 Answer

The problem here is that your NumPy input uses double as its data type: np.random.normal returns a float64 array by default, and torch.from_numpy preserves that dtype, so the resulting tensor is a DoubleTensor rather than a FloatTensor.

The weights of your layer self.fully_connected, on the other hand, are float, since nn.Linear creates its parameters as float32. Feeding data through the layer applies a matrix multiplication, and this multiplication requires both matrices to be of the same data type.
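To see the mismatch directly, you can inspect the dtypes on both sides. A quick check (the layer shape here is arbitrary, just for illustration):

import numpy as np
import torch
from torch import nn

# np.random.normal returns a float64 array by default,
# and torch.from_numpy keeps that dtype
zvec = torch.from_numpy(np.random.normal(size=100))
print(zvec.dtype)   # torch.float64 -> a DoubleTensor

# nn.Linear creates its parameters as float32
layer = nn.Linear(100, 10, bias=False)
print(layer.weight.dtype)   # torch.float32 -> a FloatTensor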

So you have two solutions:

  • You can convert your input to float:

By changing:

gen(torch.from_numpy(np.random.normal(size=100)))

To:

gen(torch.from_numpy(np.random.normal(size=100)).float())

The input fed into gen is then converted to float before it reaches the layer.

Full working code for converting inputs:

from torch import nn
import torch
import numpy as np

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.fully_connected = nn.Linear(100, 1024*4*4, bias=False)

    def forward(self, zvec):
        print(zvec.size())
        fc = self.fully_connected(zvec)
        return fc.size()

gen = Generator()
gen(torch.from_numpy(np.random.normal(size=100)).float())  # converting the network input to float

  • Alternatively, you can convert your layer weights to double:

If you need double precision, you can instead convert your weights to double.

Change this line:

self.fully_connected = nn.Linear(100, 1024*4*4, bias=False)

To:

self.fully_connected = nn.Linear(100, 1024*4*4, bias=False).double()

Full working code for converting weights:

from torch import nn
import torch
import numpy as np

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.fully_connected = nn.Linear(100, 1024*4*4, bias=False).double()  # converting the layer weights to double

    def forward(self, zvec):
        print(zvec.size())
        fc = self.fully_connected(zvec)
        return fc.size()

gen = Generator()
gen(torch.from_numpy(np.random.normal(size=100)))

Both ways will work for you, but if you don't need the extra precision of double you should go with float, since double requires more memory and more compute.
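If you would rather not decide which side to convert at each call site, one common pattern is to cast the input to the layer's weight dtype inside forward. This is a minimal sketch of that pattern (my own variation on the code above, not required for the fix):

from torch import nn
import torch
import numpy as np

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.fully_connected = nn.Linear(100, 1024*4*4, bias=False)

    def forward(self, zvec):
        # cast the input to whatever dtype the layer's weights use,
        # so a float64 NumPy input no longer triggers a dtype mismatch
        zvec = zvec.to(self.fully_connected.weight.dtype)
        print(zvec.size())
        fc = self.fully_connected(zvec)
        return fc.size()

gen = Generator()
gen(torch.from_numpy(np.random.normal(size=100)))  # double input, converted inside forward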

answered by MBT