So, I'm doing some k-means classification using numpy arrays that are quite sparse: lots and lots of zeroes. I figured that I'd use scipy's 'sparse' package to reduce the storage overhead, but I'm a little confused about how to create arrays, not matrices.
I've gone through this tutorial on how to create sparse matrices: http://www.scipy.org/SciPy_Tutorial#head-c60163f2fd2bab79edd94be43682414f18b90df7
To mimic an array, I just create a 1xN matrix, but as you may guess, Asp.dot(Bsp) doesn't quite work because you can't multiply two 1xN matrices. I'd have to transpose one of them to Nx1, and that's pretty lame, since I'd be doing it for every dot-product calculation.
Next up, I tried to create an NxN matrix where column 1 == row 1 (such that you can multiply two matrices and just take the top-left corner as the dot product), but that turned out to be really inefficient.
I'd love to use scipy's sparse package as a magic replacement for numpy's array(), but as yet, I'm not really sure what to do.
Any advice?
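For concreteness, here is a small sketch of the problem described above (a, b, Asp and Bsp are made-up toy vectors, not the real data):

    import numpy, scipy.sparse

    a = numpy.array([0., 0., 3., 0., 5.])
    b = numpy.array([1., 0., 0., 0., 2.])

    Asp = scipy.sparse.csr_matrix(a)     # shape (1, 5)
    Bsp = scipy.sparse.csr_matrix(b)     # shape (1, 5)

    try:
        Asp.dot(Bsp)                     # (1x5) * (1x5): inner dimensions don't match
    except ValueError as e:
        print(e)                         # dimension mismatch

    # The workaround the question complains about: transpose one operand first.
    print(Asp.dot(Bsp.T)[0, 0])          # 10.0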
Sparse matrices contain only a few non-zero values. Storing such data in a dense two-dimensional structure wastes space, and it is computationally expensive to represent and work with sparse matrices as though they were dense.

SciPy provides several data structures for creating sparse matrices, as well as tools for converting a dense matrix to a sparse one. Many linear algebra functions in NumPy and SciPy that operate on NumPy arrays can also operate transparently on SciPy sparse matrices.

The function csr_matrix() creates a sparse matrix in compressed sparse row format, whereas csc_matrix() creates one in compressed sparse column format.

Compressed sparse row (CSR) and compressed sparse column (CSC) are the most widely known and used formats, mainly suited to write-once-read-many tasks. Internally, CSR is based on three numpy arrays: data holds all non-zero entries in row-major order, indices holds the column index of each entry, and indptr marks where each row's entries start and end within data.
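For example (using a made-up 3x3 matrix M), the three internal arrays can be inspected directly:

    import numpy, scipy.sparse

    M = numpy.array([[1., 0., 0.],
                     [0., 0., 2.],
                     [0., 3., 0.]])
    M_csr = scipy.sparse.csr_matrix(M)

    print(M_csr.data)     # [1. 2. 3.]  non-zero values in row-major order
    print(M_csr.indices)  # [0 2 1]     column index of each stored value
    print(M_csr.indptr)   # [0 1 2 3]   row i occupies data[indptr[i]:indptr[i+1]]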
Use a scipy.sparse format that is row or column based: csc_matrix and csr_matrix.

These use efficient, C implementations under the hood (including multiplication), and transposition is a no-op (especially if you call transpose(copy=False)), just like with numpy arrays.
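As a rough sketch of this suggestion (with made-up toy vectors a and b), the dot product of two sparse row vectors becomes a 1xN-by-Nx1 product, and the transpose is essentially free:

    import numpy, scipy.sparse

    a = scipy.sparse.csr_matrix(numpy.array([0., 1., 0., 4.]))
    b = scipy.sparse.csr_matrix(numpy.array([2., 0., 0., 3.]))

    bt = b.transpose(copy=False)   # reinterprets the CSR arrays as CSC; nothing is copied
    print((a * bt)[0, 0])          # 12.0 -- the dot product of the two row vectors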
EDIT: some timings via ipython:
    import numpy, scipy.sparse

    n = 100000
    x = (numpy.random.rand(n) * 2).astype(int).astype(float)  # 50% sparse vector
    x_csr = scipy.sparse.csr_matrix(x)
    x_dok = scipy.sparse.dok_matrix(x.reshape(x_csr.shape))
Now x_csr and x_dok are 50% sparse:
    print repr(x_csr)
    <1x100000 sparse matrix of type '<type 'numpy.float64'>'
            with 49757 stored elements in Compressed Sparse Row format>
And the timings:
    timeit numpy.dot(x, x)
    10000 loops, best of 3: 123 us per loop

    timeit x_dok * x_dok.T
    1 loops, best of 3: 1.73 s per loop

    timeit x_csr.multiply(x_csr).sum()
    1000 loops, best of 3: 1.64 ms per loop

    timeit x_csr * x_csr.T
    100 loops, best of 3: 3.62 ms per loop
So it looks like I told a lie. Transposition is very cheap, but there is no efficient C implementation of csr * csc (in the latest scipy 0.9.0). A new csr object is constructed in each call :-(
As a hack (though scipy is relatively stable these days), you can do the dot product directly on the sparse data; note this only works here because both operands are the same vector, so their .data arrays line up:
    timeit numpy.dot(x_csr.data, x_csr.data)
    10000 loops, best of 3: 62.9 us per loop
Note this last approach does a numpy dense multiplication again. The sparsity is 50%, so it's actually faster than dot(x, x) by a factor of 2.
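For two different sparse vectors, the .data shortcut above no longer applies (their non-zeros sit at different positions), but the multiply().sum() and transpose-product forms from the timings still give the dense result. A quick sketch, using made-up names y, y_csr and d_*:

    import numpy, scipy.sparse

    n = 100000
    x = (numpy.random.rand(n) * 2).astype(int).astype(float)
    y = (numpy.random.rand(n) * 2).astype(int).astype(float)
    x_csr = scipy.sparse.csr_matrix(x)
    y_csr = scipy.sparse.csr_matrix(y)

    d_dense  = numpy.dot(x, y)               # dense reference
    d_mult   = x_csr.multiply(y_csr).sum()   # element-wise sparse product, then sum
    d_matmul = (x_csr * y_csr.T)[0, 0]       # (1xN) * (Nx1) -> 1x1 sparse, take the scalar
    print(d_dense == d_mult == d_matmul)     # True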