I am new to PyTorch. While playing around with tensors I observed two kinds of tensors:
tensor(58)
tensor([57.3895])
I printed their shape and the output was respectively -
torch.Size([])
torch.Size([1])
What is the difference between the two?
To get the shape of a tensor in PyTorch, we can use two approaches: the size() method and the shape attribute of a tensor.
We access the size (or shape) of a tensor, and the number of elements in it, as metadata of the tensor. To access the size of a tensor, we use the .size() method, and the shape of a tensor is accessed via the .shape attribute.
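A quick sketch of both approaches (the variable names here are my own):

```python
import torch

# Create a 2x3 tensor of zeros to inspect its metadata.
t = torch.zeros(2, 3)

# .size() is a method call; .shape is an attribute -- both return torch.Size.
print(t.size())             # torch.Size([2, 3])
print(t.shape)              # torch.Size([2, 3])
print(t.size() == t.shape)  # True

# torch.Size subclasses tuple, so list() converts it to a plain Python list.
print(list(t.shape))        # [2, 3]

# .numel() gives the total number of elements.
print(t.numel())            # 6
```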
torch.Tensor(10) will return an uninitialized FloatTensor with 10 values, while torch.tensor(10) will return a LongTensor containing a single value (10). I would recommend using the second approach (lowercase t) or any other factory method instead of creating uninitialized tensors via torch.Tensor.
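A small sketch of that difference (the uninitialized values in the first tensor will vary between runs):

```python
import torch

# torch.Tensor(10): uninitialized FloatTensor with 10 values (contents are garbage).
a = torch.Tensor(10)
print(a.shape, a.dtype)  # torch.Size([10]) torch.float32

# torch.tensor(10): a 0-dimensional LongTensor holding the single value 10.
b = torch.tensor(10)
print(b.shape, b.dtype)  # torch.Size([]) torch.int64
```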
A tensor is a vector or matrix of n dimensions that can represent all types of data. All values in a tensor hold an identical data type with a known (or partially known) shape. The shape of the data is the dimensionality of the matrix or array. A tensor can originate from the input data or from the result of a computation.
You can play with tensors holding a single scalar value like this:
import torch
t = torch.tensor(1)
print(t, t.shape) # tensor(1) torch.Size([])
t = torch.tensor([1])
print(t, t.shape) # tensor([1]) torch.Size([1])
t = torch.tensor([[1]])
print(t, t.shape) # tensor([[1]]) torch.Size([1, 1])
t = torch.tensor([[[1]]])
print(t, t.shape) # tensor([[[1]]]) torch.Size([1, 1, 1])
t = torch.unsqueeze(t, 0)
print(t, t.shape) # tensor([[[[1]]]]) torch.Size([1, 1, 1, 1])
t = torch.unsqueeze(t, 0)
print(t, t.shape) # tensor([[[[[1]]]]]) torch.Size([1, 1, 1, 1, 1])
t = torch.unsqueeze(t, 0)
print(t, t.shape) # tensor([[[[[[1]]]]]]) torch.Size([1, 1, 1, 1, 1, 1])
# squeeze the dimension with index 0
t = torch.squeeze(t,dim=0)
print(t, t.shape) # tensor([[[[[1]]]]]) torch.Size([1, 1, 1, 1, 1])
# back to the beginning
t = torch.squeeze(t)
print(t, t.shape) # tensor(1) torch.Size([])
print(type(t)) # <class 'torch.Tensor'>
print(type(t.data)) # <class 'torch.Tensor'>
Tensors do have a size, or shape, which are the same thing; it is actually an instance of the class torch.Size. You can write help(torch.Size) to get more info. Any time you write t.shape, or call t.size(), you will get that size info. The idea of tensors is that they can have different compatible size dimensions for the data inside them, including torch.Size([]).
Any time you unsqueeze a tensor it will add another dimension of size 1. Any time you squeeze a tensor with a dim argument it will remove that dimension if it has size 1; squeezing without arguments removes all dimensions of size 1.
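A minimal sketch of that distinction (the example tensor shape is my own choice):

```python
import torch

t = torch.zeros(1, 3, 1, 2)

# squeeze with dim only removes that dimension, and only if it has size 1.
print(torch.squeeze(t, dim=0).shape)  # torch.Size([3, 1, 2])
print(torch.squeeze(t, dim=1).shape)  # torch.Size([1, 3, 1, 2]) -- size 3, unchanged

# squeeze without dim removes every dimension of size 1.
print(torch.squeeze(t).shape)         # torch.Size([3, 2])

# unsqueeze inserts a new dimension of size 1 at the given position.
print(torch.unsqueeze(t, 2).shape)    # torch.Size([1, 3, 1, 1, 2])
```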
The first one has 0 dimensions, the second one has 1 dimension; PyTorch tries to make both compatible (a 0-dimensional tensor can be regarded similarly to a plain float or the like, although I haven't really met a case where it's explicitly needed, except what @javadr showed in his answer below). Usually you would use a list to initialize it, though; see here for more information.
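A brief sketch of that compatibility, broadcasting the 0-dimensional tensor against the 1-dimensional one (using the values from the question):

```python
import torch

scalar = torch.tensor(58)         # torch.Size([])
vector = torch.tensor([57.3895])  # torch.Size([1])

# PyTorch broadcasts the 0-dim tensor; the result keeps the 1-dim shape.
result = scalar + vector
print(result, result.shape)  # tensor([115.3895]) torch.Size([1])

# A 0-dim tensor also behaves like a plain Python number in many contexts.
print(float(scalar))         # 58.0
```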
Look at the documentation of torch.tensor in PyTorch:
Docstring:
tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor
Constructs a tensor with :attr:`data`.
then it describes what the data is:
Args:
data (array_like): Initial data for the tensor. Can be a list, tuple,
NumPy ``ndarray``, scalar, and other types.
As you can see, the data could be a scalar (which is data with a dimension of zero). Thus, in response to your question, tensor(58) is a tensor with dimension 0 and tensor([58]) is a tensor with dimension 1.
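In practice, both can be reduced to a plain Python number with .item(); a small sketch of where they differ:

```python
import torch

a = torch.tensor(58)    # 0-dimensional
b = torch.tensor([58])  # 1-dimensional

print(a.dim(), b.dim())    # 0 1
print(a.item(), b.item())  # 58 58

# Indexing differs: b[0] works, while a[0] would raise an IndexError.
print(b[0])                # tensor(58)
```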