 

Pytorch: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead

I have an error in my code that I can't fix no matter what I try.

The Error is simple, I return a value:

torch.exp(-LL_total/T_total) 

and get the error later in the pipeline:

RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead. 

Solutions such as cpu().detach().numpy() give the same error.

How could I fix it? Thanks.

asked Apr 02 '19 by tstseby

People also ask

Can't call numpy() on tensor?

You can't call .numpy() on a tensor if that tensor is part of the computation graph. You first have to detach it from the graph; this returns a new tensor that shares the same underlying storage but doesn't track gradients (requires_grad is False).
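A minimal sketch of that behavior: detach() returns a new, untracked tensor, and because the storage is shared, an in-place edit on the detached tensor is visible through the original.

```python
import torch

# A tensor that is part of the autograd graph
t = torch.tensor([1.0, 2.0], requires_grad=True)

# detach() returns a new tensor: same storage, no gradient tracking
d = t.detach()
print(d.requires_grad)   # False

# Because storage is shared, in-place edits on d show through t
d[0] = 5.0
print(t)                 # tensor([5., 2.], requires_grad=True)
```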

What is detach().numpy()?

The detach() operation detaches the tensor from the current computational graph, so we can no longer compute a gradient with respect to that tensor. After detach(), we can use the .numpy() method to convert it to a NumPy array.
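The two steps are usually chained in one expression:

```python
import torch

t = torch.tensor([1.0, 2.0], requires_grad=True)

# Detach from the graph, then convert to a NumPy array
arr = t.detach().numpy()
print(type(arr))   # <class 'numpy.ndarray'>
print(arr)         # [1. 2.]
```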

What is PyTorch detach?

Tensor.detach() detaches a tensor from the current computational graph. It returns a new tensor that doesn't require a gradient. When we don't need a tensor to be traced for gradient computation, we detach it from the current computational graph.

What is torch.no_grad?

with torch.no_grad() is a context manager: every operation performed inside the block runs with gradient tracking disabled, so the resulting tensors have requires_grad set to False and are not attached to the current computational graph.
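A short sketch of the context manager: results computed inside the block are untracked, so .numpy() works on them directly.

```python
import torch

w = torch.tensor([1.0, 2.0], requires_grad=True)

with torch.no_grad():
    y = w * 2              # no graph is built for this operation

print(y.requires_grad)     # False
print(y.numpy())           # [2. 4.] -- allowed, since y is untracked
```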


2 Answers

 Error reproduced

import torch

tensor1 = torch.tensor([1.0, 2.0], requires_grad=True)
print(tensor1)
print(type(tensor1))

tensor1 = tensor1.numpy()
print(tensor1)
print(type(tensor1))

which leads to the exact same error for the line tensor1 = tensor1.numpy():

tensor([1., 2.], requires_grad=True)
<class 'torch.Tensor'>
Traceback (most recent call last):
  File "/home/badScript.py", line 8, in <module>
    tensor1 = tensor1.numpy()
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.

Process finished with exit code 1

Generic solution

This is exactly what the error message suggests: just replace var with your variable name.

import torch

tensor1 = torch.tensor([1.0, 2.0], requires_grad=True)
print(tensor1)
print(type(tensor1))

tensor1 = tensor1.detach().numpy()
print(tensor1)
print(type(tensor1))

which returns as expected

tensor([1., 2.], requires_grad=True)
<class 'torch.Tensor'>
[1. 2.]
<class 'numpy.ndarray'>

Process finished with exit code 0

Some explanation

You need to convert your tensor to another tensor that holds the same value but doesn't require a gradient. That other tensor can then be converted to a NumPy array. Cf. this discuss.pytorch post. (More precisely, I think this is needed to get the actual tensor out of its PyTorch Variable wrapper, cf. this other discuss.pytorch post.)
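Applied to the asker's return value, a sketch could look like this (LL_total and T_total are hypothetical stand-in values here; in the real code they come from the pipeline):

```python
import torch

# Hypothetical stand-ins for the asker's LL_total and T_total
LL_total = torch.tensor(2.0, requires_grad=True)
T_total = torch.tensor(4.0)

out = torch.exp(-LL_total / T_total)  # still attached to the graph
val = out.detach().numpy()            # detach first, then convert
print(val)                            # ~0.6065, i.e. exp(-0.5)
```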

answered Oct 07 '22 by Blupon


I had the same error message, but it occurred while drawing a scatter plot with matplotlib.

There are two steps that got me out of this error:

  1. Import the fastai.basics library with: from fastai.basics import *

  2. If you only use the torch library, remember to turn off gradient tracking with:

    with torch.no_grad():
        (your code)
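For the plotting case, a minimal sketch (hypothetical x, w values; the matplotlib call is shown as a comment for context):

```python
import torch

# Hypothetical data; in the real case these come from your model
x = torch.linspace(0.0, 1.0, 5)
w = torch.tensor(3.0, requires_grad=True)

with torch.no_grad():
    y = w * x              # y does not track gradients

print(y.requires_grad)     # False
# plt.scatter(x.numpy(), y.numpy()) would now run without the RuntimeError
```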
answered Oct 07 '22 by Rickantonais