 

How do I get the value of a tensor in PyTorch?

Printing a tensor x gives:

>>> x = torch.tensor([3])
>>> print(x)
tensor([3])

Indexing x.data gives:

>>> x.data[0]
tensor(3)

How do I get just a regular non-tensor value 3?

asked Oct 12 '22 by apostofes

People also ask

How do you find the value of tensor?

The easiest way to evaluate the actual value of a Tensor object is to pass it to the Session.run() method, or call Tensor.eval() when you have a default session (i.e. in a with tf.Session(): block). (Note: this applies to TensorFlow 1.x, not PyTorch.)

What does Tensor.item() do?

item() returns the value of this tensor as a standard Python number. This only works for tensors with one element.

How do you assign a value to a tensor in PyTorch?

Assigning a new value to an element of a tensor modifies the tensor in place. Import torch, create a PyTorch tensor, access the element you want by index, and overwrite it using the assignment operator.
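
A minimal sketch of in-place element assignment (the tensor values here are illustrative):

import torch
t = torch.tensor([1, 2, 3])
t[1] = 9          # in-place assignment to the element at index 1
print(t)          # tensor([1, 9, 3])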


2 Answers

You can use x.item() to get a Python number from a Tensor that has one element.
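
For example, using the tensor from the question:

import torch
x = torch.tensor([3])
print(x.item())   # 3, a plain Python int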

answered Oct 19 '22 by Vimal Thilak


To get a value from a single-element tensor, x.item() always works:

Example: Single element tensor on CPU

x = torch.tensor([3])
x.item()

Output:

3

Example: Single element tensor on CPU with AD

x = torch.tensor([3.], requires_grad=True)
x.item()

Output:

3.0

NOTE: requires_grad=True needs a floating-point dtype, because autograd (AD) only supports floating-point tensors.

Example: Single element tensor on CUDA

x = torch.tensor([3], device='cuda')
x.item()

Output:

3

Example: Single element tensor on CUDA with AD

x = torch.tensor([3.], device='cuda', requires_grad=True)
x.item()

Output:

3.0

Example: Single element tensor on CUDA with AD again

x = torch.ones((1,1), device='cuda', requires_grad=True)
x.item()

Output:

1.0

To get a value out of a tensor with more than one element, we have to be careful:
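
One safe pattern is to index down to a single element first and then call item() (the values here are illustrative):

import torch
a = torch.tensor([[1., 2.], [3., 4.]])
v = a[0, 1].item()  # index to one element, then extract a Python number
print(v)            # 2.0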

The next example shows that a PyTorch tensor residing on the CPU shares the same storage as the NumPy array na:

Example: Shared storage

import torch
a = torch.ones((1,2))
print(a)
na = a.numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]])
[[10.  1.]]
tensor([[10.,  1.]])

Example: Eliminate effect of shared storage, copy numpy array first

To avoid the effects of shared storage, we copy() the NumPy array na to a new NumPy array nac. NumPy's copy() method creates new, separate storage.

import torch
a = torch.ones((1,2))
print(a)
na = a.numpy()
nac = na.copy()
nac[0][0]=10
print(nac)
print(na)
print(a)

Output:

tensor([[1., 1.]])
[[10.  1.]]
[[1. 1.]]
tensor([[1., 1.]])

Now only the nac NumPy array is altered by the line nac[0][0] = 10; na and a remain as they were.
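
As an alternative sketch (using clone(), which is not covered above but also allocates fresh tensor storage), cloning the tensor before converting achieves the same isolation in one step:

import torch
a = torch.ones((1, 2))
nac = a.clone().numpy()  # clone() allocates new storage before the conversion
nac[0][0] = 10
print(a)                 # original tensor is unchanged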

Example: CPU tensor requires_grad=True

import torch
a = torch.ones((1,2), requires_grad=True)
print(a)
na = a.detach().numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], requires_grad=True)
[[10.  1.]]
tensor([[10.,  1.]], requires_grad=True)

If we had instead called:

na = a.numpy()

it would raise RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead., because tensors with requires_grad=True are recorded by PyTorch AD.

This is why we need to detach() them first before converting with numpy().
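
The failure can be demonstrated directly by catching the exception:

import torch
a = torch.ones((1, 2), requires_grad=True)
try:
    na = a.numpy()  # raises: the tensor is tracked by autograd
except RuntimeError as e:
    err = str(e)
    print(err)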

Example: CUDA tensor requires_grad=False

a = torch.ones((1,2), device='cuda')
print(a)
na = a.to('cpu').numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], device='cuda:0')
[[10.  1.]]
tensor([[1., 1.]], device='cuda:0')

Here .to('cpu') copies the CUDA tensor to host memory, so the resulting NumPy array does not share storage with a; modifying na leaves a unchanged.

Example: CUDA tensor requires_grad=True

a = torch.ones((1,2), device='cuda', requires_grad=True)
print(a)
na = a.detach().to('cpu').numpy()
na[0][0]=10
print(na)
print(a)

Output:

tensor([[1., 1.]], device='cuda:0', requires_grad=True)
[[10.  1.]]
tensor([[1., 1.]], device='cuda:0', requires_grad=True)

Without the detach() call, the error RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead. is raised.

Without .to('cpu') (or the equivalent .cpu()), the error TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. is raised.

answered Oct 19 '22 by prosti