NumPy matrix-vector multiplication with the numpy.matmul() method. To compute the product of two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
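For instance, here is a minimal sketch of that shape rule (the arrays m and n are purely illustrative):
>>> import numpy as np
>>> m = np.ones((2, 3))          # 2 rows, 3 columns
>>> n = np.ones((3, 4))          # 3 rows, 4 columns
>>> np.matmul(m, n).shape        # inner dimensions match: (2, 3) x (3, 4) -> (2, 4)
(2, 4)
>>> # np.matmul(n, m) would raise a ValueError: (3, 4) x (2, 3) has mismatched inner dimensions (4 != 2)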
Historically, the * operator (and arithmetic operators in general) was defined as an element-wise operation on ndarrays and as matrix multiplication on the numpy.matrix type, while the method/function dot was used for matrix multiplication of ndarrays.
Because a NumPy array is densely packed in memory thanks to its homogeneous type, it is also freed faster. Overall, a task executed with NumPy is typically around 5 to 100 times faster than with a standard Python list, which is a significant leap in speed.
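As a rough illustration of that claim (timings are machine-dependent, and the names size, py_list, and np_arr are just for this sketch), you can compare summing a plain Python list with summing a NumPy array:
>>> import timeit
>>> import numpy as np
>>> size = 1_000_000
>>> py_list = list(range(size))
>>> np_arr = np.arange(size)
>>> list_time = timeit.timeit(lambda: sum(py_list), number=100)   # pure-Python iteration
>>> numpy_time = timeit.timeit(lambda: np_arr.sum(), number=100)  # vectorized reduction, typically far smaller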
matmul and the @ operator both outperform np.dot.
Use numpy.dot or a.dot(b). See the documentation here.
>>> a = np.array([[5, 1, 3],
                  [1, 1, 1],
                  [1, 2, 1]])
>>> b = np.array([1, 2, 3])
>>> a.dot(b)
array([16, 6, 8])
You need dot (rather than *) because numpy arrays are not matrices, and the standard operations *, +, -, / work element-wise on arrays.
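To see the difference with the same a and b as above, compare * with dot (a small sketch):
>>> a * b              # element-wise: b is broadcast against each row of a
array([[5, 2, 9],
       [1, 2, 3],
       [1, 4, 3]])
>>> a.dot(b)           # matrix-vector product
array([16, 6, 8])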
Note that while you can use numpy.matrix, where * is treated as standard matrix multiplication, numpy.matrix is deprecated (as of early 2021) and may be removed in future releases. See the note in its documentation (reproduced below):
It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future.
Thanks @HopeKing.
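For completeness, here is a minimal sketch of what the deprecated numpy.matrix class does (not recommended for new code):
>>> m = np.matrix([[5, 1, 3], [1, 1, 1], [1, 2, 1]])
>>> v = np.matrix([[1], [2], [3]])
>>> m * v              # on np.matrix, * means matrix multiplication
matrix([[16],
        [ 6],
        [ 8]])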
Also know there are other options:
If you are using Python 3.5+, the @ operator works as you'd expect:
>>> a @ b
array([16, 6, 8])
If you want overkill, you can use numpy.einsum. The documentation will give you a flavor for how it works, but honestly, I didn't fully understand how to use it until reading this answer and just playing around with it on my own.
>>> np.einsum('ji,i->j', a, b)
array([16, 6, 8])
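To get a feel for the subscript notation, the same style of string also handles matrix-matrix products; 'ij,jk->ik' spells out "sum over the shared index j" (a small sketch, equivalent to a @ a):
>>> np.einsum('ij,jk->ik', a, a)
array([[29, 12, 19],
       [ 7,  4,  5],
       [ 8,  5,  6]])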
numpy.matmul (available since NumPy 1.10, and what the @ operator calls) works like numpy.dot with two major exceptions: it does not allow scalar multiplication, but it does work with stacks of matrices.
>>> np.matmul(a, b)
array([16, 6, 8])
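The "stacks of matrices" part means matmul broadcasts over leading dimensions, treating a 3-D array as a batch of 2-D matrices (a small sketch; the names stack and vec are just illustrative):
>>> stack = np.arange(24).reshape(4, 2, 3)   # a stack of four 2x3 matrices
>>> vec = np.arange(3)                       # shape (3,)
>>> np.matmul(stack, vec).shape              # one matrix-vector product per matrix in the stack
(4, 2)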
numpy.inner functions the same way as numpy.dot for matrix-vector multiplication, but behaves differently for matrix-matrix and tensor multiplication (see Wikipedia regarding the differences between the inner product and dot product in general, or see this SO answer regarding numpy's implementations).
>>> np.inner(a, b)
array([16, 6, 8])
# Beware using for matrix-matrix multiplication though!
>>> b = a.T
>>> np.dot(a, b)
array([[35,  9, 10],
       [ 9,  3,  4],
       [10,  4,  6]])
>>> np.inner(a, b)
array([[29, 12, 19],
       [ 7,  4,  5],
       [ 8,  5,  6]])
If you have tensors (arrays of dimension greater than or equal to one), you can use numpy.tensordot with the optional argument axes=1:
>>> b = np.array([1, 2, 3])   # reset b, which was reassigned to a.T above
>>> np.tensordot(a, b, axes=1)
array([16, 6, 8])
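With genuinely higher-dimensional arrays, axes=1 contracts the last axis of the first argument against the first axis of the second (a small sketch; the name t is just illustrative):
>>> t = np.arange(24).reshape(2, 4, 3)   # last axis has length 3
>>> np.tensordot(t, b, axes=1).shape     # contracts that axis against b (length 3)
(2, 4)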
Don't use numpy.vdot if you have a matrix of complex numbers, as the matrix will be flattened to a 1D array; it will then try to find the complex conjugate dot product between your flattened matrix and vector (which will fail due to a size mismatch n*m vs n).
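A quick sketch of why that fails with the a and b from above:
>>> a.ravel().shape    # vdot flattens the matrix first: 9 elements
(9,)
>>> b.shape            # but the vector has only 3
(3,)
>>> # np.vdot(a, b) therefore raises a ValueError (size mismatch: 9 vs 3)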