 

Understanding tensordot

Having learned how to use einsum, I am now trying to understand how np.tensordot works.

However, I am a bit lost, especially regarding the various possibilities for the axes parameter.

Since I have never practiced tensor calculus, I am using the following example to understand it:

import numpy as np

A = np.random.randint(2, size=(2, 3, 5))
B = np.random.randint(2, size=(3, 2, 4))

In this case, what are the different possible np.tensordot computations, and how would you compute them manually?

asked Jan 26 '17 by floflo29




2 Answers

The idea with tensordot is pretty simple: we input the arrays and the respective axes along which the sum-reductions are intended. The axes that take part in the sum-reduction are removed in the output, and all of the remaining axes from the input arrays are spread out as different axes in the output, keeping the order in which the input arrays are fed.
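As a concrete sketch of that rule (using the shapes from the question; the nested loop is purely illustrative), a one-axis tensordot matches a plain multiply-and-sum over the reduced pair, with the surviving axes indexing the output:

import numpy as np

A = np.random.randint(2, size=(2, 3, 5))
B = np.random.randint(2, size=(3, 2, 4))

# Contract A's axis 0 with B's axis 1 (both of length 2).
out = np.tensordot(A, B, axes=((0,), (1,)))    # shape (3, 5, 3, 4)

# Manual equivalent: surviving axes of A first, then surviving axes of B;
# the paired axes are summed away.
manual = np.zeros((3, 5, 3, 4), dtype=A.dtype)
for j in range(3):
    for k in range(5):
        for l in range(3):
            for m in range(4):
                manual[j, k, l, m] = sum(A[i, j, k] * B[l, i, m] for i in range(2))

print(np.allclose(out, manual))    # True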

Let's look at a few sample cases with one and two axes of sum-reduction, and also swap the input order to see how the order is kept in the output.

I. One axis of sum-reduction

Inputs :

In [7]: A = np.random.randint(2, size=(2, 6, 5))
   ...: B = np.random.randint(2, size=(3, 2, 4))

Case #1:

In [9]: np.tensordot(A, B, axes=((0),(1))).shape
Out[9]: (6, 5, 3, 4)

A : (2, 6, 5) -> reduction of axis=0
B : (3, 2, 4) -> reduction of axis=1

Output : `(2, 6, 5)`, `(3, 2, 4)` ===(2 gone)==> `(6,5)` + `(3,4)` => `(6,5,3,4)`

Case #2 (same as case #1 but the inputs are fed swapped):

In [8]: np.tensordot(B, A, axes=((1),(0))).shape
Out[8]: (3, 4, 6, 5)

B : (3, 2, 4) -> reduction of axis=1
A : (2, 6, 5) -> reduction of axis=0

Output : `(3, 2, 4)`, `(2, 6, 5)` ===(2 gone)==> `(3,4)` + `(6,5)` => `(3,4,6,5)`
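Both cases can be cross-checked against np.einsum (the subscript strings here are my own choice; note that only the output order differs between the two calls):

np.allclose(np.tensordot(A, B, axes=((0),(1))), np.einsum('ijk,lim->jklm', A, B))    # True
np.allclose(np.tensordot(B, A, axes=((1),(0))), np.einsum('lim,ijk->lmjk', B, A))    # True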

II. Two axes of sum-reduction

Inputs :

In [11]: A = np.random.randint(2, size=(2, 3, 5))
    ...: B = np.random.randint(2, size=(3, 2, 4))

Case #1:

In [12]: np.tensordot(A, B, axes=((0,1),(1,0))).shape
Out[12]: (5, 4)

A : (2, 3, 5) -> reduction of axis=(0,1)
B : (3, 2, 4) -> reduction of axis=(1,0)

Output : `(2, 3, 5)`, `(3, 2, 4)` ===(2,3 gone)==> `(5)` + `(4)` => `(5,4)`
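Written out element-wise, this contraction is out[k, m] = sum over i and j of A[i, j, k] * B[j, i, m], which can be checked with an einsum of my own choosing:

np.allclose(np.tensordot(A, B, axes=((0,1),(1,0))), np.einsum('ijk,jim->km', A, B))    # True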

Case #2:

In [14]: np.tensordot(B, A, axes=((1,0),(0,1))).shape
Out[14]: (4, 5)

B : (3, 2, 4) -> reduction of axis=(1,0)
A : (2, 3, 5) -> reduction of axis=(0,1)

Output : `(3, 2, 4)`, `(2, 3, 5)` ===(2,3 gone)==> `(4)` + `(5)` => `(4,5)`

We can extend this scheme to as many axes of sum-reduction as needed.
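For instance, a three-axis sum-reduction (shapes chosen here purely for illustration):

A = np.random.randint(2, size=(2, 3, 4, 5))
B = np.random.randint(2, size=(4, 3, 2, 6))

# A's axes (0, 1, 2) pair with B's axes (2, 1, 0): lengths 2, 3, 4 line up,
# leaving A's length-5 axis and B's length-6 axis in the output.
np.tensordot(A, B, axes=((0, 1, 2), (2, 1, 0))).shape    # (5, 6)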

answered Sep 25 '22 by Divakar


tensordot swaps axes and reshapes the inputs so it can apply np.dot to two 2D arrays. It then swaps and reshapes back to the target shape. It may be easier to experiment than to explain. There's no special tensor math going on, just an extension of dot to work in higher dimensions; "tensor" here just means arrays with more than two dimensions. If you are already comfortable with einsum, then it will be simplest to compare the results to that.
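A sketch of that swap-reshape-dot mechanism (my own reconstruction of the idea, not tensordot's actual source):

import numpy as np

A = np.random.randint(2, size=(2, 3, 5))
B = np.random.randint(2, size=(3, 2, 4))

# Contract A's axes (0, 1) with B's axes (1, 0):
# move the contracted axes of A to the end and of B to the front,
# collapse each contracted group into one dimension, then call np.dot.
A2 = A.transpose(2, 0, 1).reshape(5, 2 * 3)    # (5, 6)
B2 = B.transpose(1, 0, 2).reshape(2 * 3, 4)    # (6, 4)
out = A2.dot(B2)                               # (5, 4)

print(np.allclose(out, np.tensordot(A, B, axes=((0, 1), (1, 0)))))    # True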

A sample test, summing on one pair of axes:

In [823]: np.tensordot(A,B,[0,1]).shape
Out[823]: (3, 5, 3, 4)
In [824]: np.einsum('ijk,lim',A,B).shape
Out[824]: (3, 5, 3, 4)
In [825]: np.allclose(np.einsum('ijk,lim',A,B),np.tensordot(A,B,[0,1]))
Out[825]: True

Another, summing on two:

In [826]: np.tensordot(A,B,[(0,1),(1,0)]).shape
Out[826]: (5, 4)
In [827]: np.einsum('ijk,jim',A,B).shape
Out[827]: (5, 4)
In [828]: np.allclose(np.einsum('ijk,jim',A,B),np.tensordot(A,B,[(0,1),(1,0)]))
Out[828]: True

We could do the same with the (1,0) pair. Given the mix of dimensions, I don't think there's another combination.
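For reference, that remaining single-pair case (a sketch) sums A's axis 1 against B's axis 0, both of length 3:

np.tensordot(A, B, [1, 0]).shape    # (2, 5, 2, 4)
np.allclose(np.tensordot(A, B, [1, 0]), np.einsum('ijk,jlm', A, B))    # True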

answered Sep 24 '22 by hpaulj