I'm new to PyTorch. I've read a lot of PyTorch code that heavily uses a tensor's .data member, but searching for .data in the official documentation and on Google turns up very little. I guess .data contains the data in the tensor, but when do we need it and when don't we?
.data was an attribute of Variable (the object representing a Tensor with history tracking, e.g. for automatic differentiation), not of Tensor. In practice, .data gave access to the Variable's underlying Tensor.
However, since PyTorch version 0.4.0, Variable and Tensor have been merged (into an updated Tensor structure), so .data disappeared along with the old Variable object (well, Variable is still there for backward compatibility, but it is deprecated).
Paragraph from the Release Notes for version 0.4.0 (I recommend reading the whole section about Variable/Tensor updates):
What about .data?

.data was the primary way to get the underlying Tensor from a Variable. After this merge, calling y = x.data still has similar semantics. So y will be a Tensor that shares the same data with x, is unrelated with the computation history of x, and has requires_grad=False.

However, .data can be unsafe in some cases. Any changes on x.data wouldn't be tracked by autograd, and the computed gradients would be incorrect if x is needed in a backward pass. A safer alternative is to use x.detach(), which also returns a Tensor that shares data with requires_grad=False, but will have its in-place changes reported by autograd if x is needed in backward.
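The difference is easiest to see with an in-place modification. The sketch below is my own illustration of the semantics described in that paragraph (not code from the release notes):

```python
import torch

# Unsafe: modifying through .data is invisible to autograd.
a = torch.ones(3, requires_grad=True)
b = a.sigmoid()
c = b.data         # shares storage with b, changes not tracked by autograd
c.zero_()          # silently corrupts the value sigmoid's backward relies on
b.sum().backward()
print(a.grad)      # tensor([0., 0., 0.]) -- wrong gradients, no error raised

# Safer: the same modification through .detach() is detected.
a2 = torch.ones(3, requires_grad=True)
b2 = a2.sigmoid()
d = b2.detach()    # also shares storage, also has requires_grad=False
d.zero_()
# b2.sum().backward()  # raises RuntimeError: a Tensor needed for gradient
#                      # computation has been modified by an in-place operation
```

In short: use .detach() when you want a history-free view of a tensor; autograd will then complain if you mutate something it still needs, instead of silently computing wrong gradients.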