
Algorithm for quadratic-form matrix multiplication with a sparse matrix

I am optimizing code that relies heavily on a custom-made Matrix library (which won't be removed from the project because it is used everywhere; this is not nice, but it's a fact...). Many calculations are done with matrices of 10-20 rows and columns, and many of them include a quadratic form like

C = A*B*A'

I realized that A is often sparse and I would like to make use of this fact. So I am looking for an algorithm that handles this case. Numerical stability is important. Is there anything I can use? (I didn't write our library, so I don't know whether there are any pitfalls I should take into account.)

Since "our" simple O(n³) multiplication method executes faster than Eigen 3 on the target platform, I need numerical stability, and the matrices aren't very big, I guess that Strassen's algorithm and the Coppersmith–Winograd algorithm aren't what I am looking for. What I want is just the quadratic-form multiplication written in a way that lets me easily check for zeros in A.
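To make the pattern concrete, this is roughly the kind of routine I have in mind, written against a hypothetical row-major Matrix type rather than our actual library (the type and function names here are just placeholders):

#include <cstddef>
#include <vector>

// Placeholder for our Matrix class: a simple row-major container.
using Matrix = std::vector<std::vector<double>>;

// C = A * B * A', skipping work whenever an entry of A is zero.
// A is n x m, B is m x m, so C is n x n; n and m are around 10-20 here.
Matrix quadraticForm(const Matrix& A, const Matrix& B)
{
    const std::size_t n = A.size();
    const std::size_t m = B.size();

    // T = A * B: the whole inner loop is skipped when A[i][k] == 0.
    Matrix T(n, std::vector<double>(m, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < m; ++k) {
            const double a = A[i][k];
            if (a == 0.0) continue;            // exploit sparsity of A
            for (std::size_t j = 0; j < m; ++j)
                T[i][j] += a * B[k][j];
        }

    // C = T * A': again only non-zero entries of A contribute.
    Matrix C(n, std::vector<double>(n, 0.0));
    for (std::size_t j = 0; j < n; ++j)
        for (std::size_t k = 0; k < m; ++k) {
            const double a = A[j][k];
            if (a == 0.0) continue;
            for (std::size_t i = 0; i < n; ++i)
                C[i][j] += T[i][k] * a;
        }
    return C;
}

(If B is symmetric, C = A*B*A' is symmetric too, so only one triangle of the second stage really needs to be computed.)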

Philipp asked Dec 15 '11


3 Answers

There exists this paper, dealing with fast sparse matrix multiplication. The developed algorithm divides the sparse matrix into two parts, a dense one and a sparse one, and applies a fast multiplication algorithm to them. So to me it looks like it does not depend on the size of the matrix, as you mentioned with respect to Strassen, but on the fact that it is sparse.
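Just to illustrate what such a split buys (this is only my reading of the idea, not the algorithm from the paper): if A = A_dense + A_sparse, then A*B = A_dense*B + A_sparse*B, and the second term only has to touch the stored non-zeros. A toy sketch:

#include <cstddef>
#include <vector>

struct Triplet { std::size_t row, col; double value; };  // one non-zero of A_sparse
using Dense = std::vector<std::vector<double>>;           // row-major dense matrix

// C += A_sparse * B, visiting only the stored non-zeros of A_sparse.
// The A_dense * B term would go through whatever fast dense kernel you have.
void addSparseTerm(const std::vector<Triplet>& Asparse, const Dense& B, Dense& C)
{
    for (const Triplet& t : Asparse)                      // non-zero A[i][k]
        for (std::size_t j = 0; j < B[t.col].size(); ++j)
            C[t.row][j] += t.value * B[t.col][j];         // adds to row i of C
}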

Sebastian Dressler answered Nov 02 '22


There are ways to implement a sparse matrix that make it more condensed than a dense matrix. One way I do it is as follows:

[0 0 0 0 0]
[0 1 2 0 9]
[0 0 0 2 0]
[0 1 0 0 0]

becomes a linear array of non-zero elements

typedef struct {
    int row;       /* row index of the non-zero entry */
    int col;       /* column index of the non-zero entry */
    double entry;  /* its value */
} Element;

typedef Element* SparseMatrix;  /* array of non-zero elements */

So the matrix now has a space complexity proportional to the number of non-zero elements instead of O(n²). For A*B, where A and B are sparse matrices, you only need to traverse the two arrays looking for matching elements (i.e. a->col == b->row) and add the products of the matching pairs together (the inner products). For sparse inputs this is roughly O(n²) rather than O(n³), because you skip the frivolous operation of forming inner products that would come out to zero.
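A minimal sketch of that traversal (the dense result container and the explicit dimension parameters are my additions; the Element layout is the one from above, repeated so the snippet stands alone):

#include <cstddef>
#include <vector>

struct Element { int row; int col; double entry; };  // same layout as above
using SparseMatrix = std::vector<Element>;           // list of non-zero entries

// C = A * B on the coordinate representation. An element A[i][k] pairs with
// every element B[k][j] (a.col == b.row) and contributes A[i][k] * B[k][j]
// to C[i][j]; products that would be zero are never formed.
std::vector<std::vector<double>>
multiply(const SparseMatrix& A, const SparseMatrix& B,
         std::size_t rowsA, std::size_t colsB)
{
    std::vector<std::vector<double>> C(rowsA, std::vector<double>(colsB, 0.0));
    for (const Element& a : A)
        for (const Element& b : B)
            if (a.col == b.row)                      // matching pair
                C[a.row][b.col] += a.entry * b.entry;
    return C;
}

The cost is proportional to the number of matching non-zero pairs, which is where the savings over the dense O(n³) loop come from.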

rurouniwallace answered Nov 02 '22


Check out Aydın Buluç, John R. Gilbert: Highly Parallel Sparse Matrix-Matrix Multiplication.

Rohit Vipin Mathews answered Nov 02 '22