I have an error in my code that I can't fix no matter what I try.
The error is simple: I return a value:
torch.exp(-LL_total/T_total)
and get the error later in the pipeline:
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
Solutions such as cpu().detach().numpy() give the same error.
How could I fix it? Thanks.
You can't call .numpy() on a tensor that is part of the computation graph. You first have to detach it from the graph; this returns a new tensor that shares the same underlying storage but doesn't track gradients (requires_grad is False).
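A minimal sketch of this behavior (the tensor names here are just for illustration): detaching produces a tensor you can convert to NumPy, and because the storage is shared, in-place changes to the detached tensor are visible through the original.

import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x.detach()          # new tensor, same storage, no gradient tracking

print(y.requires_grad)  # False
print(y.numpy())        # works: [1. 2.]

# the storage is shared: mutating y changes x's data too
y[0] = 5.0
print(x)                # tensor([5., 2.], requires_grad=True)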
Tensor.detach() detaches a tensor from the current computational graph and returns a new tensor that doesn't require a gradient. We use it when we don't need a tensor to be traced for gradient computation. After the detach() operation, we can call the .numpy() method to convert the result to a NumPy array.
The with torch.no_grad() context manager disables gradient tracking inside its block: every tensor computed inside it has requires_grad set to False, i.e. the results are not attached to the current computational graph.
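A short sketch of how no_grad behaves (variable names are illustrative): results computed inside the block don't track gradients, so .numpy() works on them directly.

import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)

with torch.no_grad():
    y = x * 2           # computed without gradient tracking

print(y.requires_grad)  # False
print(y.numpy())        # works: [2. 4.]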
import torch

tensor1 = torch.tensor([1.0, 2.0], requires_grad=True)
print(tensor1)
print(type(tensor1))
tensor1 = tensor1.numpy()
print(tensor1)
print(type(tensor1))
which leads to the exact same error for the line tensor1 = tensor1.numpy():

tensor([1., 2.], requires_grad=True)
<class 'torch.Tensor'>
Traceback (most recent call last):
  File "/home/badScript.py", line 8, in <module>
    tensor1 = tensor1.numpy()
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.

Process finished with exit code 1
This was suggested to you in the error message itself: just replace var with your variable name.
import torch

tensor1 = torch.tensor([1.0, 2.0], requires_grad=True)
print(tensor1)
print(type(tensor1))
tensor1 = tensor1.detach().numpy()
print(tensor1)
print(type(tensor1))
which returns, as expected:

tensor([1., 2.], requires_grad=True)
<class 'torch.Tensor'>
[1. 2.]
<class 'numpy.ndarray'>

Process finished with exit code 0
You need to convert your tensor to another tensor that holds the same values but doesn't require a gradient. That other tensor can then be converted to a NumPy array. Cf. this discuss.pytorch post. (More precisely, I think this is needed to get the actual tensor out of its PyTorch Variable wrapper; cf. this other discuss.pytorch post.)
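Applied to the expression from the question, this would look something like the sketch below (LL_total and T_total are the question's own variables; the .cpu() call only matters if the tensor may live on a GPU):

# detach from the graph, move to CPU if necessary, then convert
result = torch.exp(-LL_total / T_total).detach().cpu().numpy()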
I had the same error message, but in my case it came from drawing a scatter plot with matplotlib.
There are two steps that got me out of this error (see the sketch after this list):
1. Import the fastai.basics library with: from fastai.basics import *
2. If you only use the torch library, remember to take off requires_grad with:

with torch.no_grad():
    (your code)
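As a sketch of the matplotlib case (tensor names are illustrative, not from the original post): matplotlib converts its inputs to NumPy arrays internally, so detaching both tensors before plotting avoids the error.

import torch
import matplotlib.pyplot as plt

x = torch.linspace(0, 1, 50, requires_grad=True)
y = x ** 2  # part of the computation graph

# detach before handing the tensors to matplotlib
plt.scatter(x.detach().numpy(), y.detach().numpy())
plt.show()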