I'm trying to find the eigenvalues of a family of matrices (a Hamiltonian with a perturbation). The matrix is the sum of a diagonal matrix "mat1" and another matrix "mat2" multiplied by a parameter:
matrix=mat1+g*mat2
and I'm trying to find the eigenvalues as a function of the parameter g. I need the n lowest eigenvalues - not the ones with the lowest magnitude, just the lowest. All the charts show g on the horizontal axis and the eigenvalue on the vertical axis. I am using Eigen with Spectra and have tried a few different methods. First: Eigen's SelfAdjointEigenSolver for a dense matrix. It gives the best results: since it calculates ALL the eigenvalues, I can always get the lowest ones with no problem. The only issue is that when different eigenvalue curves cross, they swap places, since Eigen sorts the values - but I don't think there is anything to be fixed about that. Calculating all the eigenvalues of a dense matrix is, however, very time consuming, and for bigger matrices I can't use it.

Then I tried Spectra's SymEigsShiftSolver. It finds the eigenvalues closest to a given shift value, however, which creates a problem when some eigenvalues become very negative: on the chart it looks like curves swap places, so it is also not very good. I tried setting the shift value to be negative; it kind of works, but it demands that I know the lowest eigenvalue (which I do not) and generates weird effects in the upper levels (which will be explained with the next method).

Finally, I tried Spectra's GenEigsSolver with the sorting rule set to SortRule::SmallestReal. It produces situations where some eigenvalues swap places and some are not detected at all, so not one level is calculated properly.

Is there any way to calculate the lowest eigenvalues properly and quickly, and to fix these swaps and detection problems? Simply sorting the eigenvalues won't help, since sometimes one of them is simply not detected at all, so the generated values end up completely out of sync.
For these demonstrations I used a simple diagonal matrix with the sequence 0 to 99 on the diagonal, and another symmetric matrix full of random numbers from -1 to 1. The code used to generate the charts:
#include <iostream>
#include <fstream>
#include <chrono>
#include <cstdlib>
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <Spectra/SymEigsShiftSolver.h>
#include <Spectra/MatOp/SparseSymShiftSolve.h>
#include <Spectra/GenEigsSolver.h>
#include <Spectra/MatOp/SparseGenMatProd.h>

using namespace Spectra;
using namespace Eigen;
using namespace std;

int main()
{
    int N = 100;
    SparseMatrix<double> mat1(N, N);
    SparseMatrix<double> mat2(N, N);
    for (int i = 0; i < N; i++)
    {
        mat1.insert(i, i) = i;
        for (int j = i; j < N; j++)
        {
            double r = rand() / double(RAND_MAX) * 2.0 - 1;
            mat2.insert(i, j) = r;
            if (i != j) mat2.insert(j, i) = r;
        }
    }
    cout << mat1 << "\n\n" << mat2 << "\n";

    // Calculating ALL eigenvalues from the dense matrix using SelfAdjointEigenSolver
    ofstream Energies("output1.csv");
    SelfAdjointEigenSolver<MatrixXd> es;
    for (double g = -10; g < 10; g += 0.1)
    {
        cout << g << " \n";
        MatrixXd matrix = mat1 + g * mat2;
        es.compute(matrix);
        Energies << g << "\t";
        for (int i = 0; i < es.eigenvalues().rows(); i++)
        {
            Energies << es.eigenvalues()[i] << "\t";
        }
        Energies << "\n";
        Energies.flush();
    }
    Energies.close();

    // Calculating the first 20 eigenvalues from the sparse matrix using SymEigsShiftSolver
    Energies.open("output2.csv");
    for (double g = -10; g < 10; g += 0.1)
    {
        cout << g << " \n";
        SparseSymShiftSolve<double> op(mat1 + g * mat2);
        SymEigsShiftSolver<SparseSymShiftSolve<double>> eig(op, 20, 50, 0.0);
        eig.init();
        cout << eig.compute() << "\n";
        VectorXd evalues;
        if (eig.info() == CompInfo::Successful)
            evalues = eig.eigenvalues();
        int n = evalues.rows();
        Energies << g << "\t";
        for (int i = 0; i < n; i++)
        {
            Energies << evalues(i) << "\t";
        }
        Energies << "\n";
        Energies.flush();
    }
    Energies.close();

    // Calculating the first 20 eigenvalues from the sparse matrix using GenEigsSolver
    Energies.open("output3.csv");
    for (double g = -10; g < 10; g += 0.1)
    {
        cout << g << " \n";
        SparseMatrix<double> matrix = mat1 + g * mat2;
        SparseGenMatProd<double> op(matrix);
        GenEigsSolver<SparseGenMatProd<double>> eig(op, 20, 50);
        eig.init();
        cout << eig.compute(SortRule::SmallestReal) << "\n";
        VectorXd evalues;
        if (eig.info() == CompInfo::Successful)
            evalues = eig.eigenvalues().real();
        int n = evalues.rows();
        Energies << g << "\t";
        for (int i = 0; i < n; i++)
        {
            Energies << evalues(n - i - 1) << " ";
        }
        Energies << "\n";
        Energies.flush();
    }
    Energies.close();
}
The third variant you have (using GenEigsSolver<SparseGenMatProd<double>>) is the closest to what you want, but it has an issue: in the region where you are having trouble, at least some of the eigenvalues you are looking for are close to 0, and in that case the SortRule::SmallestReal mode doesn't work very well for iterative algorithms.
This is due to how the iterative algorithms implemented in the Spectra library work. To simplify quite a bit: when looking for the smallest/largest real eigenvalue, they repeatedly apply the matrix to a random initial test vector. Mathematically, a test vector can be decomposed in the eigenbasis of the matrix as v = c1 * b1 + c2 * b2 + ... (v being the random initial test vector, bN the Nth eigenvector and cN the corresponding coefficient). Applying the matrix repeatedly then gives A^k * v = e1^k * c1 * b1 + e2^k * c2 * b2 + e3^k * c3 * b3 + ... (eN being the Nth eigenvalue) - do that often enough and the e1^k term dominates if e1 is the eigenvalue of largest magnitude. (Negative eigenvalues work similarly.)

This is how iterative solvers are able to find just the highest/lowest eigenvalues - but the method inherently favors eigenvalues of large magnitude, so it breaks down when the eigenvalues you want have abs(eN) close to 0: their terms shrink away instead of dominating. The actually implemented algorithms are not quite as simple as described here, but they rely on this basic principle, which is why this way of thinking about them is a useful model. (For finding the lowest-magnitude eigenvalues, a different variation of this method is used.) Hopefully that explains why the algorithms you are using run into trouble in the region you describe.
What you could do is first compute an upper bound on the largest eigenvalue - the maximum coefficient of the sparse matrix is a cheap estimate that works here, since your matrix is dominated by its diagonal (a Gershgorin row-sum bound would be safer in general) - and then shift the matrix down by that value, so that the eigenvalues with the largest real part end up at or near 0 and all of the rest become much more negative. Most notably, the eigenvalues you want will then be the ones furthest away from zero, making the entire iterative algorithm much more stable.
Example modification to your example code:
SparseMatrix<double>matId(N,N); // NEW
matId.setIdentity(); // NEW
// ...
SparseMatrix<double> matrix=mat1+g*mat2;
double c = matrix.coeffs().abs().maxCoeff(); // NEW: largest coefficient by absolute value
// (you could also estimate a global c irrespective of g,
// as long as it's always big enough)
matrix -= c * matId; // NEW
SparseGenMatProd<double> op(matrix);
GenEigsSolver<SparseGenMatProd<double>> eig(op, 20, 50);
eig.init();
cout<<eig.compute(SortRule::SmallestReal)<<"\n";
VectorXd evalues;
if(eig.info() == CompInfo::Successful)
evalues = eig.eigenvalues().real();
int N=evalues.rows();
evalues.array() += c; // NEW
(I don't know if there's a syntactically easier way to subtract a multiple of the identity matrix from a sparse matrix in Eigen - that's why the code is a bit clunky.)
This trick with shifting the matrix only works well if your matrix is well-behaved enough (your example matrices definitely were). To give an extreme example: if the largest coefficient is 10^16 but the other coefficients are typically of order 1, the shift will fail miserably due to lack of precision. To be fair, in an example that extreme, any matrix decomposition algorithm using double precision will likely fail anyway. The point is that the larger the largest coefficient of your matrix is relative to the typical magnitude of its coefficients, the worse the resulting precision will be in general - and this trick makes that effect worse.
Irrespective of that discussion: since you know your matrix is symmetric, a further improvement would be to use SymEigsSolver<SparseSymMatProd<double>> with SortRule::SmallestAlge instead of your current code. That should improve performance (especially since you won't have complex eigenvalues to deal with).