 

Difference between Parameter vs. Tensor in PyTorch

Tags:

pytorch

I would like to know the difference between PyTorch Parameter and Tensor?

The existing answers seem to be for older PyTorch versions, where Variable was still used.

asked Jun 21 '19 by Dex
People also ask

What is a parameter in PyTorch?

Parameters are Tensor subclasses that have a very special property when used with Modules: when they are assigned as Module attributes, they are automatically added to the module's list of parameters and will appear, for example, in the parameters() iterator.
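A minimal sketch of this automatic registration (the class and attribute names here are just illustrative):

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        # Assigned as a Module attribute -> automatically registered
        self.w = nn.Parameter(torch.zeros(3))

m = Tiny()
# named_parameters() yields (name, parameter) pairs for everything registered
names = [name for name, _ in m.named_parameters()]
print(names)  # ['w']
```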

What is the difference between tensor and variable PyTorch?

According to the official PyTorch documentation, both classes are multi-dimensional matrices containing elements of a single data type and share the same API; almost any operation available on a Tensor can also be performed on a Variable. The difference is that a Variable is a wrapper around a Tensor. (Variable has been deprecated since PyTorch 0.4, when the two classes were merged.)
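Since the merge, tensors track gradients directly via requires_grad, so the old Variable wrapper is unnecessary. A small sketch:

```python
import torch

# No Variable wrapper needed: the tensor itself tracks gradients
x = torch.ones(2, requires_grad=True)
y = (x * 3).sum()
y.backward()
print(x.grad)  # tensor([3., 3.])
```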

What does tensor mean in PyTorch?

A PyTorch Tensor is conceptually the same as a NumPy array: by itself it knows nothing about deep learning, computational graphs, or gradients; it is just a generic n-dimensional array for arbitrary numeric computation.
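The closeness to NumPy shows in how cheaply the two interconvert; a quick sketch:

```python
import numpy as np
import torch

a = np.arange(4.0)
t = torch.from_numpy(a)  # shares memory with the NumPy array
back = t.numpy()         # view back as a NumPy array
print(back)  # [0. 1. 2. 3.]
```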

What does NN parameter do?

So, in conclusion, nn.Parameter() wraps the tensor passed into it without any additional processing such as initialization. That means that if the tensor passed in is empty or uninitialized, the parameter will also be empty or uninitialized.
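A short sketch of this pass-through behavior (note that requires_grad defaults to True for a Parameter):

```python
import torch
import torch.nn as nn

t = torch.tensor([1.0, 2.0])
p = nn.Parameter(t)
# The data is exactly what was passed in; nothing is re-initialized
print(p.data)           # tensor([1., 2.])
print(p.requires_grad)  # True
```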


1 Answer

The whole idea of the Parameter class fits in a few lines of source code.

Since Parameter is subclassed from Tensor, a Parameter is a Tensor.

But there is a trick: a Parameter assigned inside a Module is automatically added to that module's list of parameters. If m is your module, m.parameters() will include it.

Here is the example:

import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        # Both attributes are Parameters, so both are registered
        self.weights = nn.Parameter(torch.randn(2, 2))
        self.bias = nn.Parameter(torch.zeros(2))

    def forward(self, x):
        return x @ self.weights + self.bias

m = M()
print(list(m.parameters()))

---

[Parameter containing:
 tensor([[ 0.5527,  0.7096],
         [-0.2345, -1.2346]], requires_grad=True), Parameter containing:
 tensor([0., 0.], requires_grad=True)]

You can see that parameters() shows exactly what we defined. If we instead assign a plain tensor as an attribute, e.g. self.t = torch.randn(2, 2), it will not show up in the parameters list. That is literally it. Nothing fancy.
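To make the contrast concrete, here is a small sketch with both a Parameter and a plain tensor attribute (class and attribute names are illustrative):

```python
import torch
import torch.nn as nn

class M2(nn.Module):
    def __init__(self):
        super().__init__()
        self.p = nn.Parameter(torch.zeros(2))  # registered as a parameter
        self.t = torch.zeros(2)                # plain tensor: NOT registered

m = M2()
print([name for name, _ in m.named_parameters()])  # ['p']
```

Only the Parameter appears; the plain tensor attribute is ignored by parameters(), so an optimizer built from m.parameters() would never update it.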

answered Sep 21 '22 by prosti