The PyTorch documentation says:
Returns a new tensor with a dimension of size one inserted at the specified position. [...]
>>> x = torch.tensor([1, 2, 3, 4])
>>> torch.unsqueeze(x, 0)
tensor([[ 1,  2,  3,  4]])
>>> torch.unsqueeze(x, 1)
tensor([[ 1],
        [ 2],
        [ 3],
        [ 4]])
Unless I am missing something, the unsqueeze method does not change the tensor in place. So in the lines below, since the result of the call is not stored, these calls are either an artifact of debugging or were meant as in-place calls.
What is * ? For .view(), PyTorch expects the new shape to be provided as individual int arguments (represented in the docs as *shape). The asterisk (*) can be used in Python to unpack a list into its individual elements, thus passing view the form of arguments it expects.
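As a small illustration of that unpacking (the shape values here are arbitrary):

```python
import torch

x = torch.arange(6)        # shape (6,)
shape = [2, 3]
y = x.view(*shape)         # unpacks to x.view(2, 3)
print(y.shape)             # torch.Size([2, 3])
```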
flatten. Flattens input by reshaping it into a one-dimensional tensor. If start_dim or end_dim are passed, only dimensions starting with start_dim and ending with end_dim are flattened.
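A quick sketch of that behavior (tensor sizes chosen arbitrarily):

```python
import torch

t = torch.zeros(2, 3, 4)
print(torch.flatten(t).shape)               # torch.Size([24])
# flatten only dimensions 1 and onward, keeping the batch dimension
print(torch.flatten(t, start_dim=1).shape)  # torch.Size([2, 12])
```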
If you look at the shape of the array before and after, you see that before it was (4,) and after it is (1, 4) (when the second parameter is 0) and (4, 1) (when the second parameter is 1). So a 1 was inserted into the shape of the array at axis 0 or 1, depending on the value of the second parameter.
That is the opposite of np.squeeze() (nomenclature borrowed from MATLAB), which removes axes of size 1 (singletons).
It indicates the position at which to add the dimension: torch.unsqueeze adds an additional dimension to the tensor.
So let's say you have a tensor of shape (3,); if you add a dimension at position 0, it will be of shape (1, 3), which means 1 row and 3 columns.
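For example (the values are arbitrary):

```python
import torch

t = torch.tensor([10, 20, 30])   # shape (3,)
row = torch.unsqueeze(t, 0)      # shape (1, 3): 1 row, 3 columns
col = torch.unsqueeze(t, 1)      # shape (3, 1): 3 rows, 1 column
print(row.shape, col.shape)
```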
unsqueeze turns an n-dimensional tensor into an (n+1)-dimensional one by adding an extra dimension of depth 1. However, since it is ambiguous which axis the new dimension should lie along (i.e. in which direction the tensor should be "unsqueezed"), this needs to be specified by the dim argument.
e.g. unsqueeze can be applied to a 2d tensor in three different ways:
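The three placements can be sketched as follows (a zero tensor is used just to show the shapes):

```python
import torch

t = torch.zeros(2, 3)               # a 2-D tensor
print(torch.unsqueeze(t, 0).shape)  # torch.Size([1, 2, 3])
print(torch.unsqueeze(t, 1).shape)  # torch.Size([2, 1, 3])
print(torch.unsqueeze(t, 2).shape)  # torch.Size([2, 3, 1])
```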
The resulting unsqueezed tensors have the same information, but the indices used to access them are different.
Here are the descriptions from the PyTorch docs:
torch.squeeze(input, dim=None, *, out=None) → Tensor
Returns a tensor with all the dimensions of input of size 1 removed. For example, if input is of shape (A×1×B×C×1×D), then the out tensor will be of shape (A×B×C×D).
When dim is given, a squeeze operation is done only in the given dimension. If input is of shape (A×1×B), squeeze(input, 0) leaves the tensor unchanged, but squeeze(input, 1) will squeeze the tensor to the shape (A×B).
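That squeeze behavior, with A=2 and B=3 for concreteness:

```python
import torch

x = torch.zeros(2, 1, 3)          # shape (A, 1, B)
print(torch.squeeze(x, 0).shape)  # torch.Size([2, 1, 3]) - dim 0 is not size 1, unchanged
print(torch.squeeze(x, 1).shape)  # torch.Size([2, 3])
print(torch.squeeze(x).shape)     # torch.Size([2, 3]) - all size-1 dims removed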
torch.unsqueeze(input, dim) → Tensor
Returns a new tensor with a dimension of size one inserted at the specified position.
The returned tensor shares the same underlying data with this tensor.
A dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. Negative dim will correspond to unsqueeze() applied at dim = dim + input.dim() + 1.
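A short sketch of the negative-dim rule on a 2-D tensor (so dim may range over [-3, 3)):

```python
import torch

x = torch.zeros(2, 3)                # input.dim() == 2
print(torch.unsqueeze(x, -1).shape)  # torch.Size([2, 3, 1]); same as dim = -1 + 2 + 1 = 2
print(torch.unsqueeze(x, -3).shape)  # torch.Size([1, 2, 3]); same as dim = -3 + 2 + 1 = 0
```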
unsqueeze is a method that changes a tensor's dimensions so that operations such as tensor multiplication become possible; it alters the shape to produce a tensor with an additional dimension of size one.
For example: if you want to multiply a tensor of size (4,) with a tensor of size (4, N, N), you'll get an error. But using the unsqueeze method, you can convert the first tensor to size (4, 1, 1). Since the new dimensions have size 1, broadcasting lets you multiply the two tensors.
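That multiplication example, with N = 5 for concreteness:

```python
import torch

a = torch.tensor([1., 2., 3., 4.])  # shape (4,)
b = torch.ones(4, 5, 5)             # shape (4, N, N) with N = 5
# a * b would raise a RuntimeError: the trailing dims (4,) and (5,) don't broadcast
c = a.unsqueeze(1).unsqueeze(2)     # shape (4, 1, 1)
print((c * b).shape)                # torch.Size([4, 5, 5])
```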