Which algorithm would you recommend for fast solution of a dense linear system of fixed dimension (N=9)? The matrix is symmetric and positive-semidefinite.
The element types are 32- and 64-bit floating point.
Such systems will be solved millions of times, so the algorithm should be rather fast with respect to the dimension (N=9).
P.S. Examples of robust C++ implementations of the proposed algorithm are appreciated.
1) What do you mean by "solved millions of times"? The same coefficient matrix with a million different right-hand sides, or a million distinct matrices?
A million distinct matrices.
2) Positive _semi_definite means the matrix can be singular (to machine precision). How would you like to deal with that case? Just raise an error, or try to return some sensible answer?
Raising an error is OK.
Since the matrix is symmetric and positive-semidefinite, the Cholesky decomposition is strictly superior to the LU decomposition (roughly twice as fast as LU, whatever the size of the matrix; source: "Numerical Linear Algebra" by Trefethen and Bau).
It is also the de facto standard for small dense matrices (source: I am doing a PhD in computational mathematics). Iterative methods are less efficient than direct methods unless the system becomes large enough (a quick rule of thumb that means nothing, but is always nice to have: on any modern computer, any matrix smaller than 100×100 is definitely a small matrix that needs direct methods rather than iterative ones).
Now, I do not recommend doing it yourself. There are tons of good libraries out there that have been thoroughly tested. But if I had to recommend one, it would be Eigen.
By the way, the Eigen documentation lists the various pros and cons of its 7 direct linear solvers in a nice, concise table. It seems that in your case, LDLT (a variation of Cholesky) wins; a sketch of its use follows.
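For illustration, here is a minimal sketch of what solving one such 9×9 system with Eigen's LDLT might look like. The `solve9` wrapper and its error handling are my own choices, not part of Eigen; the types and calls are from Eigen's public API:

```cpp
#include <Eigen/Dense>

using Mat9 = Eigen::Matrix<double, 9, 9>;
using Vec9 = Eigen::Matrix<double, 9, 1>;

// Solve A x = b for a symmetric positive-semidefinite A using
// Eigen's LDLT (a robust, pivoted Cholesky variant).
bool solve9(const Mat9& A, const Vec9& b, Vec9& x)
{
    Eigen::LDLT<Mat9> ldlt(A);           // factor A = P^T L D L^T P
    if (ldlt.info() != Eigen::Success)   // factorization ran into trouble
        return false;                    // raise/flag an error, per the OP
    x = ldlt.solve(b);
    return true;
}
```

Because the matrix size is fixed at compile time, Eigen can unroll loops and avoid heap allocation, which matters when this runs millions of times.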
Generally, one is best off using an existing library, rather than a roll-your-own approach, as there are many tedious details to attend to in pursuit of a fast, stable numerical implementation.
Here are a few to get you started:
Eigen library (my personal preference):
http://eigen.tuxfamily.org/dox/QuickRefPage.html#QuickRef_Headers
Armadillo: http://arma.sourceforge.net/
Search around and you'll find plenty of others.
I would recommend LU decomposition, especially if "solved millions of times" really means "solved once and applied to millions of vectors". You create the LU decomposition once, save it, and apply forward/back substitution against as many right-hand-side vectors as you wish.
It is more stable in the face of roundoff if you use pivoting; see the sketch below.
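To make the factor-once pattern concrete, here is a hedged sketch using Eigen's PartialPivLU (the `solve_many` helper is hypothetical; any pivoted LU routine would be reused the same way):

```cpp
#include <Eigen/Dense>
#include <vector>

using Mat9 = Eigen::Matrix<double, 9, 9>;
using Vec9 = Eigen::Matrix<double, 9, 1>;

// Factor A once, then apply forward/back substitution to many
// right-hand sides without refactoring.
std::vector<Vec9> solve_many(const Mat9& A, const std::vector<Vec9>& rhs)
{
    Eigen::PartialPivLU<Mat9> lu(A);    // LU with partial (row) pivoting
    std::vector<Vec9> xs;
    xs.reserve(rhs.size());
    for (const Vec9& b : rhs)
        xs.push_back(lu.solve(b));      // reuses the stored factorization
    return xs;
}
```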
LU for a symmetric semi-definite matrix does not make much sense: you destroy a nice property of your input data while performing unnecessary operations.
The choice between LLT and LDLT really depends on the condition number of your matrices and on how you intend to treat edge cases. LDLT should be used only if you can demonstrate a statistically significant improvement in accuracy, or if robustness is of paramount importance to your application.
(Without a sample of your matrices it is hard to give sound advice, but I suspect that at such a small order, N=9, pivoting the small diagonal terms toward the bottom of D is really not necessary. So I would start with classical Cholesky and simply abort the factorization if the diagonal terms become too small with respect to some sensibly chosen tolerance.)
Cholesky is pretty simple to code, and if you strive for really fast code, it is better to implement it yourself; a sketch follows.
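Here is a minimal sketch of such a hand-rolled, unpivoted Cholesky with the tolerance-based abort described above. The function names, storage layout, and the idea that the caller picks `tol` are my assumptions, not prescribed by the answer; choosing a sensible tolerance is application-dependent:

```cpp
#include <cmath>
#include <cstddef>

// Unpivoted Cholesky factorization A = L * L^T for a fixed-size
// symmetric positive-(semi)definite matrix, row-major storage.
// Returns false (factorization aborted) if a diagonal pivot falls
// below tol, i.e. the matrix is singular to that tolerance.
template <typename T, std::size_t N = 9>
bool cholesky_factor(const T A[N][N], T L[N][N], T tol)
{
    for (std::size_t j = 0; j < N; ++j) {
        T d = A[j][j];
        for (std::size_t k = 0; k < j; ++k)
            d -= L[j][k] * L[j][k];
        if (d < tol)                     // semi-definite/singular: abort
            return false;
        L[j][j] = std::sqrt(d);
        for (std::size_t i = j + 1; i < N; ++i) {
            T s = A[i][j];
            for (std::size_t k = 0; k < j; ++k)
                s -= L[i][k] * L[j][k];
            L[i][j] = s / L[j][j];
        }
        for (std::size_t k = j + 1; k < N; ++k)
            L[j][k] = T(0);              // keep strict upper triangle zero
    }
    return true;
}

// Solve L * L^T x = b by forward and back substitution.
template <typename T, std::size_t N = 9>
void cholesky_solve(const T L[N][N], const T b[N], T x[N])
{
    T y[N];
    for (std::size_t i = 0; i < N; ++i) {        // forward: L y = b
        T s = b[i];
        for (std::size_t k = 0; k < i; ++k)
            s -= L[i][k] * y[k];
        y[i] = s / L[i][i];
    }
    for (std::size_t i = N; i-- > 0; ) {         // backward: L^T x = y
        T s = y[i];
        for (std::size_t k = i + 1; k < N; ++k)
            s -= L[k][i] * x[k];
        x[i] = s / L[i][i];
    }
}
```

With N fixed at 9 the compiler can fully unroll these loops, and the templated element type covers both the 32- and 64-bit cases the question asks about.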