Let me take a 2D matrix as an example:
mat = torch.arange(9).view(3, -1)
tensor([[0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]])
torch.sum(mat, dim=-2)
tensor([ 9, 12, 15])
I find that the result of torch.sum(mat, dim=-2) is equal to torch.sum(mat, dim=0), and dim=-1 gives the same result as dim=1. My question is how to understand negative dimensions here. What happens if the input tensor has 3 or more dimensions?
Yes, dim refers to the dimension, and its meaning is essentially the same everywhere in PyTorch. For example, in torch.chunk it specifies the dimension along which to split the tensor.
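As a minimal sketch (the tensor and chunk count below are just illustrative assumptions), the same dim argument accepts either positive or negative indexing:

import torch

x = torch.arange(12).view(3, 4)

# Split into 2 chunks along the last dimension; dim=-1 here is the same as dim=1.
a, b = torch.chunk(x, chunks=2, dim=-1)
print(a.shape)  # torch.Size([3, 2])
print(b.shape)  # torch.Size([3, 2])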
For .view(), PyTorch expects the new shape to be provided as individual int arguments (represented in the docs as *shape). The asterisk (*) can be used in Python to unpack a list into its individual elements, thus passing view the form of arguments it expects.
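A small sketch of that unpacking (the shape list here is an assumption made up for illustration):

import torch

t = torch.arange(9)
new_shape = [3, 3]

# t.view(*new_shape) unpacks the list, so it is equivalent to t.view(3, 3).
reshaped = t.view(*new_shape)
print(reshaped.shape)  # torch.Size([3, 3])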
A torch.Tensor is a multi-dimensional matrix containing elements of a single data type.
A tensor has multiple dimensions, ordered as in the example below. They can be indexed forward or backward: forward indexing uses positive integers, backward indexing uses negative integers.
Example (for a 3-dimensional tensor):
dim=-1 is the last dimension, which in this case is dim=2
dim=-2 is dim=1
dim=-3 is dim=0
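A quick sketch checking this with a 3-dimensional tensor (the shape is an arbitrary assumption):

import torch

x = torch.arange(24).view(2, 3, 4)  # a 3-dimensional tensor

# Each negative dim gives the same result as its positive counterpart.
print(torch.equal(torch.sum(x, dim=-1), torch.sum(x, dim=2)))  # True
print(torch.equal(torch.sum(x, dim=-2), torch.sum(x, dim=1)))  # True
print(torch.equal(torch.sum(x, dim=-3), torch.sum(x, dim=0)))  # True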
The minus essentially means you go backwards through the dimensions. Let A be an n-dimensional tensor. Then dim=-1 corresponds to dim=n-1, dim=-2 to dim=n-2, ..., dim=-(n-1) to dim=1, and dim=-n to dim=0. See the NumPy docs for more information, as PyTorch's indexing is heavily based on NumPy.
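In code, you can think of a negative dim as being normalized by adding the number of dimensions; a small sketch of that rule (not PyTorch's actual internals):

import torch

def normalize_dim(dim, ndim):
    # Map a negative dim to its positive equivalent: -1 -> ndim - 1, ..., -ndim -> 0.
    return dim if dim >= 0 else dim + ndim

mat = torch.arange(9).view(3, -1)
print(normalize_dim(-2, mat.dim()))  # 0, so dim=-2 behaves like dim=0
print(normalize_dim(-1, mat.dim()))  # 1, so dim=-1 behaves like dim=1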