
Eigen Linear Solver for very small square matrix

I am using Eigen in a C++ program to solve linear equations for very small (4x4) square matrices.

My test code is like

template<template <typename MatrixType> typename EigenSolver>
Vector4d solve() {
    // Solve Ax = b, where A is a real symmetric, positive semidefinite matrix

    ... // Construct the 4x4 matrix A and the 4x1 vector b

    EigenSolver<Matrix4d> solver(A);
    auto x = solver.solve(b);

    ... // Compute the relative error for validation
}

I tested the following solvers:

  1. FullPivLU
  2. PartialPivLU
  3. HouseholderQR
  4. ColPivHouseholderQR
  5. FullPivHouseholderQR
  6. CompleteOrthogonalDecomposition
  7. LDLT
  8. Direct Inverse

Direct Inverse is implemented as:

template<typename MatrixType>
struct InverseSolve
{
private:
    MatrixType inv;
public:
    InverseSolve(const MatrixType &matrix) : inv(matrix.inverse()) {}

    template<typename VectorType>
    auto solve(const VectorType &b) {
        return inv * b;
    }
};

I found that the fastest method is DirectInverse. Even when I linked Eigen with MKL, the result did not change.

These are the test results:

  1. FullPivLU : 477 ms
  2. PartialPivLU : 468 ms
  3. HouseholderQR : 849 ms
  4. ColPivHouseholderQR : 766 ms
  5. FullPivHouseholderQR : 857 ms
  6. CompleteOrthogonalDecomposition : 832 ms
  7. LDLT : 477 ms
  8. Direct Inverse : 88 ms

All timings are over 1,000,000 matrices filled with random doubles drawn from a uniform distribution on [0, 100]. I first construct the upper triangle and then copy it to the lower triangle.
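The construction described above can be sketched as follows in plain C++ (the original code presumably stores this in an Eigen::Matrix4d; a plain array is used here to keep the sketch self-contained):

```cpp
#include <array>
#include <random>

// Fill the upper triangle (including the diagonal) with uniform random
// doubles from [0, 100], then mirror it into the lower triangle so the
// resulting matrix is symmetric.
std::array<std::array<double, 4>, 4> makeSymmetric(std::mt19937 &gen)
{
    std::uniform_real_distribution<double> dist(0.0, 100.0);
    std::array<std::array<double, 4>, 4> a{};
    for (int i = 0; i < 4; ++i)
        for (int j = i; j < 4; ++j) {
            a[i][j] = dist(gen);   // upper triangle and diagonal
            a[j][i] = a[i][j];     // copy to the lower triangle
        }
    return a;
}
```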

The only problem with DirectInverse is that its relative error is slightly larger than that of the other solvers, but still acceptable.

Is there a faster or more elegant solution for my program? Is DirectInverse the fastest solution for my program?

DirectInverse does not use the symmetry information, so why is it far faster than LDLT?

asked Jun 18 '18 by LiShaoyuan

1 Answer

Despite the common advice never to explicitly compute an inverse when you only want to solve a linear system, for very small matrices this can actually be beneficial, since there are closed-form solutions using cofactors.

All other alternatives you tested will be slower, since they do pivoting (which implies branching), even for small fixed-size matrices. Most of them also require more divisions and do not vectorize as well as the direct computation.

To increase the accuracy (this technique can actually be used independently of the solver, if required), you can refine an initial solution by solving the system again with the residual:

Eigen::Vector4d solveDirect(const Eigen::Matrix4d& A, const Eigen::Vector4d& b)
{
    Eigen::Matrix4d inv = A.inverse();
    Eigen::Vector4d x = inv * b;
    x += inv * (b - A * x);  // one step of iterative refinement using the residual
    return x;
}
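To see why that correction step helps, here is a self-contained 2x2 sketch in plain C++ with made-up values: the exact inverse is deliberately rounded through single precision to mimic an inaccurately computed inverse, and one residual correction shrinks the residual by several orders of magnitude:

```cpp
#include <cmath>

// Demonstrates one step of iterative refinement on a 2x2 system.
// The exact inverse of A = [[4,1],[1,3]] is [[3,-1],[-1,4]]/11; rounding
// its entries through float yields an approximate inverse.
// Reports the residual norm before and after one refinement step.
void refineDemo(double &resBefore, double &resAfter)
{
    const double A[2][2]   = {{4.0, 1.0}, {1.0, 3.0}};
    const double b[2]      = {1.0, 2.0};
    const double inv[2][2] = {{(float)( 3.0 / 11.0), (float)(-1.0 / 11.0)},
                              {(float)(-1.0 / 11.0), (float)( 4.0 / 11.0)}};

    double x[2], r[2];
    // Initial solve: x = inv * b
    for (int i = 0; i < 2; ++i)
        x[i] = inv[i][0] * b[0] + inv[i][1] * b[1];
    // Residual: r = b - A * x
    for (int i = 0; i < 2; ++i)
        r[i] = b[i] - (A[i][0] * x[0] + A[i][1] * x[1]);
    resBefore = std::hypot(r[0], r[1]);

    // Refinement: x += inv * r, then recompute the residual
    for (int i = 0; i < 2; ++i)
        x[i] += inv[i][0] * r[0] + inv[i][1] * r[1];
    for (int i = 0; i < 2; ++i)
        r[i] = b[i] - (A[i][0] * x[0] + A[i][1] * x[1]);
    resAfter = std::hypot(r[0], r[1]);
}
```

Since the error operator of the approximate inverse has norm far below 1 here, each refinement step multiplies the residual by roughly that norm.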

I don't think Eigen directly provides a way to exploit the symmetry of A here (for the directly computed inverse). You can try hinting that by explicitly copying a selfadjoint view of A into a temporary and hope that the compiler is smart enough to find common sub-expressions:

Eigen::Matrix4d tmp = A.selfadjointView<Eigen::Upper>();
Eigen::Matrix4d inv = tmp.inverse();

To reduce the number of divisions, you can also compile with -freciprocal-math (on gcc or clang); this will slightly reduce accuracy, of course.

If this is really performance critical, try implementing a hand-tuned inverse_4x4_symmetric method. Exploiting the symmetry of inv * b is unlikely to be beneficial for such small matrices.

answered Sep 29 '22 by chtz