Consider arrays a and b:
import numpy as np

a = np.array([
    [-1, 1, 5],
    [-2, 3, 0]
])
b = np.array([
    [1, 1, 0],
    [0, 2, 3],
])
Looking at
d = a.T.dot(b)
d
array([[-1, -5, -6],
       [ 1,  7,  9],
       [ 5,  5,  0]])
d[0, 0] is -1, and is the sum of a[:, 0] * b[:, 0]. I'd like a 3x3 array of vectors where the [0, 0] position would be a[:, 0] * b[:, 0].
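As a quick sanity check of that entry (nothing new here, just the arithmetic spelled out):
(a[:, 0] * b[:, 0]).sum()   # (-1)*1 + (-2)*0 = -1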
With the above a and b, I'd expect
d = np.array([[a[:, i] * b[:, j] for j in range(b.shape[1])] for i in range(a.shape[1])])
d
array([[[-1,  0],
        [-1, -4],
        [ 0, -6]],

       [[ 1,  0],
        [ 1,  6],
        [ 0,  9]],

       [[ 5,  0],
        [ 5,  0],
        [ 0,  0]]])
The sum of d along axis=2 should be the dot product a.T.dot(b)
d.sum(2)
array([[-1, -5, -6],
       [ 1,  7,  9],
       [ 5,  5,  0]])
What is the most efficient way of getting d?
Here's one way:
In [219]: a
Out[219]:
array([[-1,  1,  5],
       [-2,  3,  0]])

In [220]: b
Out[220]:
array([[1, 1, 0],
       [0, 2, 3]])
In [221]: a.T[:,None,:] * b.T[None,:,:]
Out[221]:
array([[[-1,  0],
        [-1, -4],
        [ 0, -6]],

       [[ 1,  0],
        [ 1,  6],
        [ 0,  9]],

       [[ 5,  0],
        [ 5,  0],
        [ 0,  0]]])
Or...
In [231]: (a[:,None,:] * b[:,:,None]).T
Out[231]:
array([[[-1,  0],
        [-1, -4],
        [ 0, -6]],

       [[ 1,  0],
        [ 1,  6],
        [ 0,  9]],

       [[ 5,  0],
        [ 5,  0],
        [ 0,  0]]])
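Both expressions rely on the same broadcasting idea: insert a length-1 axis in each operand so the two column dimensions expand against each other. A quick look at the shapes involved, using the a and b above:
a.T[:, None, :].shape                        # (3, 1, 2)
b.T[None, :, :].shape                        # (1, 3, 2)
(a.T[:, None, :] * b.T[None, :, :]).shape    # (3, 3, 2)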
The most efficient approach would be broadcasting, as shown in @Warren Weckesser's post, since we are basically dealing with element-wise multiplication without any sum-reduction.
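As a rough check of that claim, here is a minimal timing sketch. The larger random inputs (a_big, b_big) are made up purely for benchmarking, and the actual numbers will vary with hardware and NumPy version:
import numpy as np
from timeit import timeit

# Hypothetical larger inputs just for timing; the question's arrays are only 2x3.
rng = np.random.default_rng(0)
a_big = rng.standard_normal((50, 200))
b_big = rng.standard_normal((50, 200))

t_broadcast = timeit(lambda: a_big.T[:, None, :] * b_big.T[None, :, :], number=100)
t_einsum = timeit(lambda: np.einsum('ij,ik->jki', a_big, b_big), number=100)
print(t_broadcast, t_einsum)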
An alternative with np.einsum would be like so -
np.einsum('ij,ik->jki', a, b)
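As a sanity check (reusing the a and b from the question), the einsum result should match the broadcasted product, and summing over the last axis should recover a.T.dot(b):
d_einsum = np.einsum('ij,ik->jki', a, b)
d_broadcast = a.T[:, None, :] * b.T[None, :, :]
print(np.array_equal(d_einsum, d_broadcast))              # True
print(np.array_equal(d_einsum.sum(axis=2), a.T.dot(b)))   # True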