What I'm looking for: a way to implement in Python a special multiplication operation for matrices that happen to be in scipy sparse (CSR) format. This is a special kind of multiplication: not matrix multiplication, not Kronecker multiplication, and not Hadamard (i.e. pointwise) multiplication, and it does not seem to have any built-in support in scipy.sparse.
The desired operation: each row of the output should contain all pairwise products of the elements of the corresponding rows in the two input matrices. So, starting with two identically sized matrices, each with dimensions m by n, the result should have dimensions m by n^2.
It looks like this:
Python code:
import numpy as np
import scipy.sparse
A = scipy.sparse.csr_matrix(np.array([[1,2],[3,4]]))
B = scipy.sparse.csr_matrix(np.array([[0,5],[6,7]]))
# C is the desired product of A and B. It should look like:
C = scipy.sparse.csr_matrix(np.array([[0,5,0,10],[18,21,24,28]]))
What would be a nice or efficient way to do this? I've tried looking here on stackoverflow as well as elsewhere, with no luck so far. So far it sounds like my best bet is to do a row by row operation in a for loop, but that sounds horrendous seeing as my input matrices have a few million rows and few thousand columns, mostly sparse.
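For clarity, the operation I want is equivalent to this dense NumPy sketch (reference only, obviously not feasible at my scale; C_ref is just an illustrative name):
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])

# Row i of the result holds all pairwise products of row i of A with row i of B,
# i.e. the flattened outer product of the two rows.
C_ref = np.array([np.outer(a, b).ravel() for a, b in zip(A, B)])
print(C_ref)
# [[ 0  5  0 10]
#  [18 21 24 28]]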
In your example, C is made up of the first and last rows of np.kron(A, B):
In [4]: A=np.array([[1,2],[3,4]])
In [5]: B=np.array([[0,5],[6,7]])
In [6]: np.kron(A,B)
Out[6]:
array([[ 0,  5,  0, 10],
       [ 6,  7, 12, 14],
       [ 0, 15,  0, 20],
       [18, 21, 24, 28]])

In [7]: np.kron(A,B)[[0,3],:]
Out[7]:
array([[ 0,  5,  0, 10],
       [18, 21, 24, 28]])
kron contains the same values as np.outer, but they are in a different order.
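To see that, note np.outer flattens both of its inputs before taking products; with the same A and B it produces the same 16 values as kron, just arranged differently (continuing the session above):
In [8]: np.outer(A,B)
Out[8]:
array([[ 0,  5,  6,  7],
       [ 0, 10, 12, 14],
       [ 0, 15, 18, 21],
       [ 0, 20, 24, 28]])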
For large dense arrays, einsum might provide good speed:
np.einsum('ij,ik->ijk',A,B).reshape(A.shape[0],-1)
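With the example arrays, that einsum expression reproduces the desired C directly (continuing the session above):
In [9]: np.einsum('ij,ik->ijk',A,B).reshape(A.shape[0],-1)
Out[9]:
array([[ 0,  5,  0, 10],
       [18, 21, 24, 28]])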
sparse.kron does the same thing as np.kron:
As = sparse.csr_matrix(A); Bs = sparse.csr_matrix(B)
sparse.kron(As,Bs).tocsr()[[0,3],:].A
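The hard-coded [[0,3],:] generalizes: with m rows, the wanted rows of the full Kronecker product sit at index i*m + i. A sketch of that idea (kron_rows is just an illustrative name; it still materializes the full m^2-row kron, which is the wasteful part):
import numpy as np
from scipy import sparse

def kron_rows(As, Bs):
    # Row i of the desired result is row i*(m+1) of sparse.kron(As, Bs).
    m = As.shape[0]
    wanted = np.arange(m) * (m + 1)
    return sparse.kron(As, Bs).tocsr()[wanted, :]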
sparse.kron is written in Python, so you probably could modify it if it is doing unnecessary calculations.
An iterative solution appears to be:
sparse.vstack([sparse.kron(a,b) for a,b in zip(As,Bs)]).A
Being iterative, I don't expect it to be faster than paring down the full kron. But short of digging into the logic of sparse.kron, it is probably the best I can do.
vstack uses bmat, so the calculation is:
sparse.bmat([[sparse.kron(a,b)] for a,b in zip(As,Bs)])
But bmat is rather complex, so it won't be easy to simplify this further.
The np.einsum solution can't be easily extended to sparse: there isn't a sparse.einsum, and the intermediate product is 3d, which sparse does not handle.
sparse.kron uses coo format, which is no good for working with the rows. But working in the spirit of that function, I've worked out a function that iterates on the rows of csr format matrices. Like kron and bmat, I construct the data, row, and col arrays, and build a coo_matrix from those. That in turn can be converted to other formats.
import numpy as np
from scipy import sparse

def test_iter(A, B):
    m, n1 = A.shape
    n2 = B.shape[1]
    Cshape = (m, n1*n2)
    # per-row pieces, collected in object arrays and concatenated at the end
    data = np.empty((m,), dtype=object)
    col = np.empty((m,), dtype=object)
    row = np.empty((m,), dtype=object)
    for i, (a, b) in enumerate(zip(A, B)):     # a, b are 1-row csr matrices
        data[i] = np.outer(a.data, b.data).flatten()
        # col1 = a.indices * np.arange(1, a.nnz+1)  # wrong when a isn't dense
        col1 = a.indices * n2                       # correction
        col[i] = (col1[:, None] + b.indices).flatten()
        row[i] = np.full((a.nnz*b.nnz,), i)
    data = np.concatenate(data)
    col = np.concatenate(col)
    row = np.concatenate(row)
    return sparse.coo_matrix((data, (row, col)), shape=Cshape)
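A quick sanity check against the question's example (As and Bs as defined above):
In [10]: test_iter(As, Bs).A
Out[10]:
array([[ 0,  5,  0, 10],
       [18, 21, 24, 28]])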
With these small 2x2 matrices, as well as larger ones (e.g. A1 = sparse.rand(1000, 2000).tocsr()), this is about 3x faster than the version using bmat. For large enough matrices it is better than the dense einsum version (which can run into memory errors).
A non-optimal way to do it is to apply kron separately to each row:
import scipy.sparse

def my_mult(A, B):
    nrows = A.shape[0]
    prodrows = []
    for i in range(nrows):                       # one row pair at a time
        Arow = A.getrow(i)
        Brow = B.getrow(i)
        prodrow = scipy.sparse.kron(Arow, Brow)  # 1 x (n*n) sparse row
        prodrows.append(prodrow)
    return scipy.sparse.vstack(prodrows)
This is approximately 3x worse in performance than @hpaulj's solution above (test_iter), as can be seen by running the following code:
A = scipy.sparse.rand(20000, 1000, density=0.05).tocsr()
B = scipy.sparse.rand(20000, 1000, density=0.05).tocsr()
# Check memory (%memit comes from the memory_profiler IPython extension)
%memit C1 = test_iter(A,B)
%memit C2 = my_mult(A,B)
# Check time
%timeit C1 = test_iter(A,B)
%timeit C2 = my_mult(A,B)
# Last but not least, check correctness!
print((C1 - C2).nnz == 0)
Results:
hpaulj's method:
peak memory: 1993.93 MiB, increment: 1883.80 MiB
1 loops, best of 3: 6.42 s per loop
this method:
peak memory: 2456.75 MiB, increment: 1558.78 MiB
1 loops, best of 3: 18.9 s per loop