I want to solve Ax = b, where A is a very large square symmetric positive definite block matrix and x and b are vectors. When I say large, I mean an n×n matrix with n as large as 300,000.
Here is an example of a much smaller but representative matrix I want to solve.
And here is the same matrix zoomed in, showing that it is composed of blocks of dense matrices.
I previously (see here, here, and as far back as here) used Eigen's Cholesky solver, which worked fine for n < 10,000, but with n = 300,000 the Cholesky solver is too slow. However, I did not take advantage of the fact that I have a block matrix. Apparently, there exist algorithms for solving sparse block matrices (e.g. block Cholesky factorization).
I would like to know specifically whether Eigen has optimized algorithms, using factorization or iterative methods, for sparse matrices with dense blocks that I can employ.
Also, can you suggest other algorithms that might be well suited to my matrix? As far as I understand, at least for factorization, finding a good fill-reducing permutation is NP-hard, so many different heuristic methods exist, and as far as I can tell people build up an intuition about different matrix structures (e.g. banded matrices) and which algorithms work best with them. I don't have this intuition yet.
I am currently looking into using a conjugate gradient method. I have implemented this myself in C++, but I am still not taking advantage of the fact that the matrix is a block matrix.
//solve A*xk = b; rk holds the current residual b - A*xk
//Eigen::SparseMatrix<double> A;
//Eigen::VectorXd xk(n), rk(n);
Eigen::VectorXd pk = rk;
double rsold = rk.dot(rk);
const int maxiter = rk.size();
for (int k = 0; k < maxiter; k++) {
    Eigen::VectorXd Ap = A * pk;
    double ak = rsold / pk.dot(Ap);
    xk += ak * pk;
    rk -= ak * Ap;
    double rsnew = rk.dot(rk);
    double xlength = xk.norm();
    //relax the tolerance when x is large
    if (std::sqrt(rsnew) < std::max(std::min(tolerance * 10000, tolerance * 10 * xlength), tolerance)) {
        rsold = rsnew;
        break;
    }
    double bk = rsnew / rsold;
    pk = rk + bk * pk;
    rsold = rsnew;
}
Check out SuiteSparse: http://faculty.cse.tamu.edu/davis/suitesparse.html. The author, Tim Davis, is well known in the field of sparse matrices; he also received an award from Google for code quality, and the performance of his code is outstanding.
Cheers