I am drafting and testing a technique I devised for solving differential equations quickly and efficiently.
It would require storing, manipulating, resizing, and (at some point) probably diagonalizing very large sparse matrices. I would like to have rows consisting of zeros and a few (say fewer than 5) ones, and to add rows a few at a time (on the order of the number of CPUs in use).
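For concreteness, here is a rough sketch of the kind of incremental construction I mean, using scipy.sparse; the column indices and batch sizes below are invented purely for illustration:

    import numpy as np
    import scipy.sparse as sp

    n_cols = 1000

    def one_hot_row(cols, n_cols):
        # Build a 1 x n_cols CSR row with ones at the given column indices.
        data = np.ones(len(cols))
        row_idx = np.zeros(len(cols), dtype=int)
        return sp.csr_matrix((data, (row_idx, cols)), shape=(1, n_cols))

    # Start with an empty matrix and append a few rows at a time.
    A = sp.csr_matrix((0, n_cols))
    for cols in [[3, 17, 42], [5, 99], [250, 251, 900]]:
        A = sp.vstack([A, one_hot_row(cols, n_cols)], format="csr")

    print(A.shape, A.nnz)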
I thought it would be useful to have GPU acceleration, so any suggestions on the best way to take advantage of that (say PyCUDA, Theano, etc.) would be appreciated too.
The most efficient storage method for symmetric sparse matrices is probably the sparse skyline format (this is what Intel MKL uses, for example). AFAIK scipy.sparse doesn't contain a symmetric sparse matrix format; Pysparse, however, does. Using Pysparse, you can build the matrix incrementally in linked-list format and then convert it to sparse skyline format. Performance-wise, I have usually found Pysparse to be superior to scipy for large sparse systems, and all the basic building blocks (matrix product, eigenvalue solver, direct solver, iterative solver) are present, although the range of routines is somewhat smaller than what is available in scipy.
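To make that workflow concrete, here is a minimal sketch of the build-then-convert pattern. It assumes the classic `from pysparse import ...` import path (newer Pysparse releases moved these modules under `pysparse.sparse` and `pysparse.itsolvers`), and the matrix contents are invented for illustration:

    import numpy as np
    from pysparse import spmatrix, itsolvers

    n = 10000

    # Build incrementally in linked-list format. ll_mat_sym stores only
    # the lower triangle of a symmetric matrix; the second argument is a
    # hint for the expected number of nonzeros.
    A = spmatrix.ll_mat_sym(n, 5 * n)
    for i in range(n):
        A[i, i] = 2.0                # assign entries with i >= j only
        if i > 0:
            A[i, i - 1] = 1.0        # each off-diagonal entry stored once

    # Convert to sparse skyline (SSS) format for fast matrix-vector products.
    S = A.to_sss()

    # Solve S x = b with the preconditioned conjugate gradient solver.
    b = np.ones(n)
    x = np.empty(n)
    info, iters, relres = itsolvers.pcg(S, b, x, 1e-8, 2000)
    print(info, iters, relres)

Because ll_mat_sym stores only the lower triangle, incremental insertion stays cheap, and the one-time SSS conversion gives the compact layout the iterative solvers work on.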