Tensor multiplication with numpy tensordot

I have a tensor U composed of n matrices of dimension (d,k) and a matrix V of dimension (k,n).

I would like to multiply them so that the result is a matrix of dimension (d,n) in which column j is the result of the matrix multiplication between matrix j of U and column j of V.


One possible way to obtain this is:

import numpy as np

res = np.zeros((d, n))
for j in range(n):
    res[:, j] = U[:, :, j].dot(V[:, j])

I am wondering if there is a faster approach using the numpy library. In particular I'm thinking of the np.tensordot() function.

This small snippet allows me to multiply a single matrix by a scalar, but the obvious generalization to a vector is not returning what I was hoping for.

a = np.arange(1, 17).reshape(4, 4)
b = np.array([1, 2, 3, 4, 5, 6, 7])

# axes=0 takes an outer product: r1 has shape (7, 4, 4),
# i.e. r1[i] is the matrix a scaled by b[i] -- not the contraction described above
r1 = np.tensordot(b, a, axes=0)

Any suggestion?

Asked Mar 04 '16 by Matteo


1 Answer

There are a couple of ways you could do this. The first thing that comes to mind is np.einsum:

import numpy as np

# some fake data
gen = np.random.RandomState(0)
ni, nj, nk = 10, 20, 100
U = gen.randn(ni, nj, nk)
V = gen.randn(nj, nk)

res1 = np.zeros((ni, nk))
for k in range(nk):
    res1[:,k] = U[:,:,k].dot(V[:,k])

res2 = np.einsum('ijk,jk->ik', U, V)

print(np.allclose(res1, res2))
# True

np.einsum uses Einstein notation to express tensor contractions. In the expression 'ijk,jk->ik' above, i, j and k are subscripts that correspond to the different dimensions of U and V. Each comma-separated group of subscripts corresponds to one of the operands passed to np.einsum (in this case U has dimensions ijk and V has dimensions jk). The '->ik' part specifies the dimensions of the output array; any dimension whose subscript is absent from the output string is summed over.
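
As a quick illustration of the notation, an ordinary matrix product can be written as a contraction over the shared subscript j (reusing the random generator from above):

A = gen.randn(3, 4)
B = gen.randn(4, 5)

# j appears in both input groups but not in the output, so it is summed over
C = np.einsum('ij,jk->ik', A, B)

print(np.allclose(C, A.dot(B)))
# True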

np.einsum is incredibly useful for performing complex tensor contractions, but it can take a while to fully wrap your head around how it works. You should take a look at the examples in the numpy documentation.
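
Since the question asks about np.tensordot: tensordot can only sum over entire axes, so it has no way to express the "matched" k index that appears in both U and V. The closest equivalent is to contract over j alone and then extract the diagonal of the two resulting k axes, which computes (and stores) a factor of nk more than is needed:

# full[i, k1, k2] = sum over j of U[i, j, k1] * V[j, k2]; shape (ni, nk, nk)
full = np.tensordot(U, V, axes=(1, 0))

# keep only the matched entries, i.e. k1 == k2
res5 = full[:, np.arange(nk), np.arange(nk)]

print(np.allclose(res1, res5))
# True

This is why einsum, which can keep k unsummed in both operands, is the better tool here.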


Some other options (both are verified against the einsum result below):

  1. Element-wise multiplication with broadcasting, followed by summation:

    res3 = (U * V[None, ...]).sum(1)
    
  2. inner1d with a load of transposing:

    from numpy.core.umath_tests import inner1d
    
    res4 = inner1d(U.transpose(0, 2, 1), V.T)
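
Both of these agree with the einsum result. In the broadcasting version, V[None, ...] has shape (1, nj, nk), which broadcasts against U's (ni, nj, nk) before the sum over axis 1:

print(np.allclose(res2, res3))
# True
print(np.allclose(res2, res4))
# True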
    

Some benchmarks:

In [1]: ni, nj, nk = 100, 200, 1000

In [2]: %%timeit U = gen.randn(ni, nj, nk); V = gen.randn(nj, nk)
   ....: np.einsum('ijk,jk->ik', U, V)
   ....: 
10 loops, best of 3: 23.4 ms per loop

In [3]: %%timeit U = gen.randn(ni, nj, nk); V = gen.randn(nj, nk)
   ....: (U * V[None, ...]).sum(1)
   ....: 
10 loops, best of 3: 59.7 ms per loop

In [4]: %%timeit U = gen.randn(ni, nj, nk); V = gen.randn(nj, nk)
   ....: inner1d(U.transpose(0, 2, 1), V.T)
   ....: 
10 loops, best of 3: 45.9 ms per loop

Answered Oct 03 '22 by ali_m