How to convert Pytorch autograd.Variable to Numpy?

The title says it all. I want to convert a PyTorch autograd.Variable to its equivalent NumPy array. The official documentation advocates using a.numpy() to get the equivalent NumPy array (for a PyTorch tensor), but this gives me the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/bishwajit/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 63, in __getattr__
    raise AttributeError(name)
AttributeError: numpy

Is there any way I can circumvent this?

asked Jun 03 '17 by Bishwajit Purkaystha

People also ask

What is Autograd variable in PyTorch?

Autograd is the PyTorch package for automatic differentiation of all operations on Tensors. It performs backpropagation starting from a variable; in deep learning, this variable often holds the value of the cost function. backward() executes the backward pass and computes all the gradients automatically.
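For illustration, here is a minimal runnable sketch of backpropagation starting from a scalar loss variable (values and shapes are arbitrary; the pre-0.4 Variable wrapper is used to match the question, but a plain tensor with requires_grad=True behaves the same on newer releases):

import torch
from torch.autograd import Variable

# Wrap a tensor in a Variable so autograd tracks the operations applied to it.
x = Variable(torch.ones(2, 2), requires_grad=True)

loss = (x * 3).sum()   # a scalar "cost" built from x
loss.backward()        # backward pass starting from the loss

print(x.grad)          # d(loss)/dx: a 2x2 tensor filled with 3s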

How does Autograd work in PyTorch?

Autograd is a reverse-mode automatic differentiation system. Conceptually, as you execute operations, autograd records a graph of all the operations that created the data, giving you a directed acyclic graph whose leaves are the input tensors and whose roots are the output tensors.
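A small sketch (assuming PyTorch >= 0.4, where tensors carry requires_grad directly) that peeks at this recorded graph through the grad_fn attribute:

import torch

a = torch.ones(3, requires_grad=True)   # leaf of the graph (input tensor)
b = a * 2                               # intermediate node
c = b.sum()                             # root of the graph (output tensor)

print(c.grad_fn)                  # the Sum operation that produced c
print(c.grad_fn.next_functions)   # edges pointing back toward the leaf a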

What is Autograd function?

autograd provides classes and functions implementing automatic differentiation of arbitrary scalar-valued functions. It requires minimal changes to existing code: you only need to declare the Tensors for which gradients should be computed with the requires_grad=True keyword.
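For example, a minimal sketch of the requires_grad=True keyword in action:

import torch

# Only tensors declared with requires_grad=True have gradients computed for them.
w = torch.randn(3, requires_grad=True)
x = torch.randn(3)                      # no gradient tracking requested

y = (w * x).sum()
y.backward()

print(w.grad)   # gradient of y with respect to w (equal to x)
print(x.grad)   # None, because x was not declared with requires_grad=True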

What is CTX in torch Autograd function?

static Function.backward(ctx, *grad_outputs) defines a formula for differentiating the operation with backward-mode automatic differentiation (it is an alias for the vjp function). This method is to be overridden by all subclasses. ctx is a context object used to pass information between the forward and backward passes, for example tensors saved with ctx.save_for_backward.
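To make the role of ctx concrete, here is a hedged sketch of a custom autograd Function (the class name MySquare is purely illustrative) that stores its input through ctx in forward and reads it back in backward:

import torch

class MySquare(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # ctx is the context object: save anything backward will need.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        # Retrieve what forward saved and apply the chain rule: d(x^2)/dx = 2x.
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = MySquare.apply(x).sum()
y.backward()
print(x.grad)   # tensor([2., 4., 6.])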


2 Answers

There are two possible cases:

  • Using GPU: If you try to convert a CUDA float tensor directly to NumPy as shown below, it will throw an error.

    x.data.numpy()

    RuntimeError: numpy conversion for FloatTensor is not supported

    So you can't convert a CUDA float tensor directly to NumPy; instead, convert it into a CPU float tensor first and then convert to NumPy, as shown below.

    x.data.cpu().numpy()

  • Using CPU: Converting a CPU tensor is straightforward (a helper combining both cases is sketched just after this list).

    x.data.numpy()
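Putting the two cases together, here is a hedged sketch of a small helper (the name to_numpy is just an illustrative choice) that handles both CPU and CUDA variables:

import torch
from torch.autograd import Variable

def to_numpy(v):
    """Convert a Variable (or tensor) to a NumPy array, moving it to the CPU first if needed."""
    t = v.data if isinstance(v, Variable) else v   # unwrap the Variable to its underlying tensor
    if t.is_cuda:
        t = t.cpu()                                # CUDA tensors must be moved to the CPU first
    return t.numpy()

x = Variable(torch.randn(2, 3))              # CPU case
print(to_numpy(x).shape)                     # (2, 3)

if torch.cuda.is_available():
    y = Variable(torch.randn(2, 3).cuda())   # GPU case
    print(to_numpy(y).shape)                 # also (2, 3)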

answered Nov 14 '22 by blitu12345


I have found the way. I can first extract the Tensor data from the autograd.Variable by using a.data; the rest is really simple. I just use a.data.numpy() to get the equivalent NumPy array. Here are the steps:

a = a.data  # a is now torch.Tensor
a = a.numpy()  # a is now numpy array
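As a quick runnable version of these steps (on newer PyTorch releases, a.detach().numpy() accomplishes the same thing):

import torch
from torch.autograd import Variable

a = Variable(torch.randn(3, 4))   # the autograd.Variable to convert

a_np = a.data.numpy()             # unwrap to a torch.Tensor, then to a NumPy array
print(type(a_np), a_np.shape)     # <class 'numpy.ndarray'> (3, 4)

Note that the NumPy array shares memory with the underlying CPU tensor, so in-place changes to one are visible in the other.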
answered Nov 14 '22 by Bishwajit Purkaystha