I have some doubts about the numpy.dot product.
I define a 6x6 matrix like this:
import numpy as np

C = np.zeros((6, 6))
C[0, 0], C[1, 1], C[2, 2] = 129.5, 129.5, 129.5
C[3, 3], C[4, 4], C[5, 5] = 25, 25, 25
C[0, 1], C[0, 2] = 82, 82
C[1, 0], C[1, 2] = 82, 82
C[2, 0], C[2, 1] = 82, 82
Then I recast it as a rank-4 tensor using a multidimensional array:
def long2short(m, n):
    """
    Given two indices m and n of the stiffness tensor, return the
    corresponding index i of the Voigt matrix:
    i = long2short(m, n)
    """
    if m == n:
        i = m
    elif (m == 1 and n == 2) or (m == 2 and n == 1):
        i = 3
    elif (m == 0 and n == 2) or (m == 2 and n == 0):
        i = 4
    elif (m == 0 and n == 1) or (m == 1 and n == 0):
        i = 5
    return i
c = np.zeros((3, 3, 3, 3))
for m in range(3):
    for n in range(3):
        for o in range(3):
            for p in range(3):
                i = long2short(m, n)
                j = long2short(o, p)
                c[m, n, o, p] = C[i, j]
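A quick sanity check (just comparing a few entries against the Voigt matrix defined above) confirms the expansion:
assert c[0, 0, 1, 1] == C[0, 1]   # C12 in Voigt notation
assert c[0, 1, 0, 1] == C[5, 5]   # C66
assert c[1, 2, 1, 2] == C[3, 3]   # C44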
Then I would like to change the coordinate reference system of the tensor using a rotation matrix, which I define as:
from math import sqrt

Q = np.array([[ sqrt(2.0/3),           0.0,  1.0/sqrt(3)],
              [-1.0/sqrt(6),  1.0/sqrt(2),   1.0/sqrt(3)],
              [-1.0/sqrt(6), -1.0/sqrt(2),   1.0/sqrt(3)]])
Qt = Q.transpose()
The matrix is orthogonal (although the numerical precision is not perfect):
In [157]: np.dot(Q, Qt)
Out[157]:
array([[ 1.00000000e+00, 4.28259858e-17, 4.28259858e-17],
[ 4.28259858e-17, 1.00000000e+00, 2.24240114e-16],
[ 4.28259858e-17, 2.24240114e-16, 1.00000000e+00]])
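A tolerance-based comparison (shown only as an illustrative alternative to reading the entries by eye) confirms this:
np.allclose(np.dot(Q, Qt), np.eye(3))   # -> True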
But why, then, if I perform:
In [158]: a=np.dot(Q,Qt)
In [159]: c_mat=np.dot(a, c)
In [160]: a1 = np.dot(Qt, c)
In [161]: c_mat1=np.dot(Q, a1)
do I get the expected value for c_mat (= c) but not for c_mat1? Is there some subtlety in using dot on multidimensional arrays?
If one input is a scalar and the other is an array, np.dot() multiplies every value of the array by the scalar (i.e., scalar multiplication). If both inputs are 1-dimensional arrays, np.dot() computes the dot product of the inputs.
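For illustration, here are minimal examples of those two cases:
np.dot(3, np.array([1, 2, 4]))                     # -> array([ 3,  6, 12])  (scalar multiplication)
np.dot(np.array([1, 2, 3]), np.array([4, 5, 6]))   # -> 32  (1*4 + 2*5 + 3*6)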
In general, NumPy arrays can have more than one dimension. One way to create such an array is to start from a 1-dimensional array and use the numpy reshape() function, which rearranges the elements of that array into a new shape.
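For example, reshaping a 1-D range into a 2x3 array:
np.arange(6).reshape(2, 3)   # -> array([[0, 1, 2],
                             #           [3, 4, 5]])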
np.matmul also handles both the 1-D dot product and matrix multiplication; still, the usual recommendation is to use np.dot for the dot product and np.matmul for 2-D or higher matrix multiplication.
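For two 2-D arrays the two functions agree; a minimal example:
A = np.arange(4).reshape(2, 2)
B = np.arange(4, 8).reshape(2, 2)
np.array_equal(np.matmul(A, B), np.dot(A, B))   # -> True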
The issue is that, for multidimensional arrays, np.dot(a, b) takes the dot product over the last dimension of a and the second-to-last dimension of b:
np.dot(a, b) == np.tensordot(a, b, axes=([-1], [2]))
(for the 4-D array c, axis 2 is precisely its second-to-last axis).
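With the arrays defined above this can be checked directly (axis -2 of the 4-D array c is its axis 2):
np.allclose(np.dot(a, c), np.tensordot(a, c, axes=([-1], [-2])))   # -> True
np.dot(a, c).shape   # -> (3, 3, 3, 3)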
As you can see, this is not a plain matrix multiplication for multidimensional arrays. Using np.tensordot() lets you control over which axes of each input the dot product is performed. For example, to get the same result for c_mat1 as you got for c_mat, you can do:
c_mat1 = np.tensordot(Q, a1, axes=([-1], [0]))
which forces matrix-multiplication-like behavior by contracting the last axis of Q with the first axis of a1.
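As a quick check with the arrays above, the tensordot version of c_mat1 now agrees with c_mat (up to floating-point error), since both contract the same axes:
c_mat1 = np.tensordot(Q, a1, axes=([-1], [0]))
np.allclose(c_mat, c_mat1)   # -> True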