 

Add / subtract between matrix and vector in PyTorch

I want to do + / - / * between a matrix and a vector in PyTorch. How can I do it with good performance? I tried using expand, but it is really slow (my matrix is big and my vector is small).

a = torch.rand(2,3)
print(a)
 0.7420  0.2990  0.3896
 0.0715  0.6719  0.0602
[torch.FloatTensor of size 2x3]
b = torch.rand(2)
print(b)
 0.3773
 0.6757
[torch.FloatTensor of size 2]
a.add(b)
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3066, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-17-a1cb1b03d031>", line 1, in <module>
    a.add(b)
RuntimeError: inconsistent tensor size, expected r_ [2 x 3], t [2 x 3] and src [2] to have the same number of elements, but got 6, 6 and 2 elements respectively at c:\miniconda2\conda-bld\pytorch-cpu_1519449358620\work\torch\lib\th\generic/THTensorMath.c:1021

Expected result:

 0.7420-0.3773  0.2990-0.3773  0.3896-0.3773
 0.0715-0.6757  0.6719-0.6757  0.0602-0.6757
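For reference, the expected result is equivalent to this naive row-wise loop (a sketch, assuming the same shapes as above):

```python
import torch

a = torch.rand(2, 3)
b = torch.rand(2)

# naive reference: subtract b[i] from every element of row i
out = torch.empty_like(a)
for i in range(a.shape[0]):
    out[i] = a[i] - b[i]

print(out)
```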
asked Jun 29 '18 by Yoav Chai




2 Answers

To make use of broadcasting, you need to promote the tensor b to two dimensions, since the tensor a is 2D.

In [43]: a
Out[43]: 
tensor([[ 0.9455,  0.2088,  0.1070],
        [ 0.0823,  0.6509,  0.1171]])

In [44]: b
Out[44]: tensor([ 0.4321,  0.8250])

# subtraction    
In [46]: a - b[:, None]
Out[46]: 
tensor([[ 0.5134, -0.2234, -0.3252],
        [-0.7427, -0.1741, -0.7079]])

# alternative way to do subtraction
In [47]: a.sub(b[:, None])
Out[47]: 
tensor([[ 0.5134, -0.2234, -0.3252],
        [-0.7427, -0.1741, -0.7079]])

# yet another approach
In [48]: torch.sub(a, b[:, None])
Out[48]: 
tensor([[ 0.5134, -0.2234, -0.3252],
        [-0.7427, -0.1741, -0.7079]])

The other operations (+, *) can be done analogously.
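The same broadcast pattern carries over directly; a quick sketch with the other operations (same shapes as above):

```python
import torch

a = torch.rand(2, 3)
b = torch.rand(2)

col = b[:, None]       # shape (2, 1), broadcasts against (2, 3)
added = a + col        # or a.add(col), torch.add(a, col)
multiplied = a * col   # or a.mul(col), torch.mul(a, col)

print(added.shape, multiplied.shape)
```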


In terms of performance, there seems to be no advantage of using one approach over the others. Just use any one of the three approaches.

In [49]: a = torch.rand(2000, 3000)
In [50]: b = torch.rand(2000)

In [51]: %timeit torch.sub(a, b[:, None])
2.4 ms ± 8.31 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [52]: %timeit a.sub(b[:, None])
2.4 ms ± 6.94 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [53]: %timeit a - b[:, None]
2.4 ms ± 12 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
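Note that `b[:, None]`, `b.unsqueeze(1)`, and `b.view(-1, 1)` all produce the same (N, 1) view, so any of them can feed the broadcast; a small check, assuming the shapes from the timing runs:

```python
import torch

b = torch.rand(2000)

v1 = b[:, None]
v2 = b.unsqueeze(1)
v3 = b.view(-1, 1)

# all three are (2000, 1) views of the same data
assert v1.shape == v2.shape == v3.shape == (2000, 1)
```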
answered Oct 16 '22 by kmario23


Have you tried the .unsqueeze_() method? You can add a dimension to a tensor in place using .unsqueeze_(), passing as an argument the index of the axis along which you want to expand. I believe this would be much faster.

a = torch.rand(2,3)
print(a)
"""Output 
0.9323  0.9162  0.9505
0.9430  0.6184  0.3671
[torch.FloatTensor of size 2x3]"""

b = torch.rand(2)
print(b)
"""Output
0.4723
0.9345
[torch.FloatTensor of size 2]"""

b.unsqueeze_(1)
"""Output
0.4723
0.9345
[torch.FloatTensor of size 2x1]"""

a.add(b)
"""Output
1.4046  1.3885  1.4228
1.8775  1.5528  1.3016
[torch.FloatTensor of size 2x3]"""
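One caveat worth noting: `.unsqueeze_()` (with the trailing underscore) modifies `b` in place, while `.unsqueeze()` returns a new view and leaves `b` unchanged. A small sketch of the difference:

```python
import torch

b = torch.rand(2)

c = b.unsqueeze(1)   # out-of-place: c is (2, 1), b stays (2,)
print(b.shape, c.shape)

b.unsqueeze_(1)      # in-place: b itself is now (2, 1)
print(b.shape)
```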
answered Oct 16 '22 by Astha Sharma