How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0?

PyTorch 0.4.0 merged the Tensor and Variable classes.

Before this version, when I wanted to create a Variable with autograd from a numpy array I would do the following (where x is a numpy array):

x = Variable(torch.from_numpy(x).float(), requires_grad=True)

With PyTorch 0.4.0, the migration guide shows how to create Tensors with autograd enabled. For example, you can do things such as

x = torch.ones(3, 4, requires_grad=True) 

and also enable requires_grad on existing tensors in place:

existing_tensor.requires_grad_()

I have tried the following three approaches to create a Tensor with requires_grad=True, all of which raise errors (where x is a numpy array):

The first is

x = FloatTensor(x, requires_grad=True)

which gives the error

TypeError: new() received an invalid combination of arguments - got 
(numpy.ndarray, requires_grad=bool), but expected one of:
 * (torch.device device)
 * (tuple of ints size, torch.device device)
      didn't match because some of the keywords were incorrect: 
requires_grad
 * (torch.Storage storage)
 * (Tensor other)
 * (object data, torch.device device)
      didn't match because some of the keywords were incorrect: 
requires_grad

The second is to do

x = FloatTensor(x)
x.requires_grad()

And the third is

x = torch.from_numpy(x).single()
x.requires_grad()

The second and third both throw the following error on their second line:

TypeError: 'bool' object is not callable

These errors give me little hint at what I'm doing wrong, and since this version is so new it's hard to find content online to help. How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0, preferably in a single line?

asked Apr 29 '18 by HBeel


People also ask

What is Requires_grad in PyTorch?

requires_grad is a flag on Tensor (and Parameter) that allows fine-grained exclusion of subgraphs from gradient computation. It takes effect in both the forward and backward passes: during the forward pass, an operation is only recorded in the backward graph if at least one of its input tensors requires grad.
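As a minimal sketch of that exclusion (the tensors a and b here are illustrative):

import torch

a = torch.ones(2, 2, requires_grad=True)   # participates in autograd
b = torch.ones(2, 2)                       # requires_grad=False by default

c = (a * b).sum()  # recorded: at least one input (a) requires grad
c.backward()
print(a.grad)      # gradients flow back to a
print(b.grad)      # None, since b was excluded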

What is Autograd variable in PyTorch?

Autograd is the PyTorch package for automatic differentiation of all operations on Tensors. It performs backpropagation starting from a variable; in deep learning, this variable often holds the value of the cost function. Calling backward() executes the backward pass and computes all the gradients automatically.
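For illustration, a small sketch of backpropagation from a scalar cost (the cost function here is made up):

import torch

w = torch.tensor([2.0, 3.0], requires_grad=True)
cost = (w ** 2).sum()  # scalar holding the value of the cost function
cost.backward()        # backward pass, starting from cost
print(w.grad)          # tensor([4., 6.]), i.e. d(cost)/dw = 2w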

How do you make a torch tensor?

To create a tensor with pre-existing data, use torch.tensor(). To create a tensor with a specific size, use torch.* tensor creation ops (see Creation Ops). To create a tensor with the same size (and similar type) as another tensor, use torch.*_like tensor creation ops (see Creation Ops).
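A short sketch of all three creation routes (shapes and values are arbitrary):

import torch

a = torch.tensor([1.0, 2.0, 3.0])  # from pre-existing data
b = torch.zeros(3, 4)              # specific size, via a creation op
c = torch.zeros_like(a)            # same size (and dtype) as a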

What is Torch Autograd function?

torch.autograd provides classes and functions implementing automatic differentiation of arbitrary scalar-valued functions. It requires minimal changes to existing code: you only need to declare the Tensors for which gradients should be computed with the requires_grad=True keyword.
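For example, a minimal sketch differentiating a scalar-valued function with torch.autograd (the function y = x ** 3 is just an illustration):

import torch

x = torch.tensor(1.5, requires_grad=True)
y = x ** 3                            # scalar-valued function of x
(dy_dx,) = torch.autograd.grad(y, x)  # automatic differentiation
print(dy_dx)                          # tensor(6.7500) = 3 * 1.5 ** 2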


1 Answer

How can I make a FloatTensor with requires_grad=True from a numpy array using PyTorch 0.4.0, preferably in a single line?

If x is your numpy array, this line should do the trick:

torch.tensor(x, requires_grad=True)

Here is a full example tested with PyTorch 0.4.0:

import numpy as np
import torch

x = np.array([1.3, 0.5, 1.9, 2.45])
print('np.array:', x)
t = torch.tensor(x, requires_grad=True)
print('tensor:', t)
print('requires_grad:', t.requires_grad)

This gives the following output:

np.array: [1.3  0.5  1.9  2.45]
tensor: tensor([ 1.3000,  0.5000,  1.9000,  2.4500], dtype=torch.float64)
requires_grad: True

Edit: the dtype of the resulting tensor is determined by the dtype of your numpy array x (float64 in this example).
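If you specifically need a FloatTensor (float32), as asked, a sketch of two possible one-liners (not from the original answer, but standard 0.4.0 API):

import numpy as np
import torch

x = np.array([1.3, 0.5, 1.9, 2.45])

# pass the dtype explicitly when copying the array
t = torch.tensor(x, dtype=torch.float32, requires_grad=True)

# or convert, cast to float32, and enable requires_grad in place
t2 = torch.from_numpy(x).float().requires_grad_()

print(t.dtype, t.requires_grad)    # torch.float32 True
print(t2.dtype, t2.requires_grad)  # torch.float32 True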

I hope this helps.

answered Sep 21 '22 by MBT