The fastest way to calculate eigenvalues of large matrices

Until now I have used numpy.linalg.eigvals to calculate the eigenvalues of square matrices with at least 1000 rows/columns and, in most cases, about a fifth of their entries non-zero (I don't know whether that should be considered a sparse matrix). I found another topic indicating that scipy could possibly do a better job.
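For concreteness, a minimal sketch of the dense baseline described above (the size and fill fraction below are only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Dense square matrix with roughly a fifth of its entries non-zero (illustrative)
A = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.2)

# All eigenvalues of a general (non-symmetric) matrix; cost grows roughly as O(n^3)
w = np.linalg.eigvals(A)
print(w.shape)  # (1000,)
```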

However, since I have to calculate the eigenvalues for hundreds of thousands of large matrices of increasing size (possibly up to 20000 rows/columns and yes, I need ALL of their eigenvalues), this will always take awfully long. If I can speed things up, even just the tiniest bit, it would most likely be worth the effort.

So my question is: Is there a faster way to calculate the eigenvalues when not restricting myself to python?

asked Jul 04 '13 by alice


2 Answers

@HighPerformanceMark is correct in the comments: the algorithms behind numpy (LAPACK and the like) are some of the best, though perhaps not state-of-the-art, numerical algorithms out there for diagonalizing full matrices. However, you can substantially speed things up if you have:

Sparse matrices

If your matrix is sparse, i.e. the number of filled entries k is such that k << N**2, then you should look at scipy.sparse.
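A hedged sketch of that route (the size and density below are illustrative); note that scipy.sparse.linalg.eigs returns only k < n eigenvalues via ARPACK's Arnoldi iteration, so it helps most when a subset of the spectrum is enough:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
# Random sparse matrix with ~1% of its entries filled (illustrative)
A = sp.random(n, n, density=0.01, format='csr', random_state=0)

# The 6 largest-magnitude eigenvalues, without computing the full spectrum
vals = spla.eigs(A, k=6, which='LM', return_eigenvectors=False)
print(vals)
```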

Banded matrices

There are numerous algorithms for working with matrices of a specific banded structure. Check out the banded solvers in scipy.linalg, e.g. scipy.linalg.solve_banded for linear systems and scipy.linalg.eig_banded for eigenvalues.
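For eigenvalues specifically, a minimal sketch using scipy.linalg.eigvals_banded, which expects a Hermitian matrix in LAPACK's banded storage (the symmetric tridiagonal matrix below is only an illustration):

```python
import numpy as np
from scipy.linalg import eigvals_banded

n = 2000
diag = 2.0 * np.ones(n)      # main diagonal
offdiag = -1.0 * np.ones(n)  # superdiagonal; element [0] is ignored in upper form

# Upper banded storage: row 0 holds the superdiagonal, row 1 the main diagonal
a_band = np.vstack([offdiag, diag])

w = eigvals_banded(a_band)   # all n eigenvalues, returned in ascending order
print(w[:3], w[-3:])
```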

Largest Eigenvalues

Most of the time, you don't really need all of the eigenvalues. In fact, most of the physical information comes from the largest eigenvalues, and the rest are simply high-frequency oscillations that are only transient. In that case you should look into eigenvalue solvers that converge quickly to those largest eigenvalues/eigenvectors, such as the Lanczos algorithm.
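A hedged sketch of that approach using scipy.sparse.linalg.eigsh, which wraps ARPACK's implicitly restarted Lanczos iteration for symmetric/Hermitian matrices (the matrix below is illustrative):

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5000
# Symmetric sparse matrix (illustrative)
A = sp.random(n, n, density=0.001, format='csr', random_state=0)
A = (A + A.T) * 0.5

# The 10 algebraically largest eigenvalues and their eigenvectors
vals, vecs = spla.eigsh(A, k=10, which='LA')
print(vals)
```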

answered Sep 24 '22 by Hooked


An easy way to maybe get a decent speedup with no code changes (especially on a many-core machine) is to link numpy to a faster linear-algebra library such as MKL, ACML, or OpenBLAS. If you're associated with an academic institution, the excellent Anaconda Python distribution will let you easily link to MKL for free; otherwise, you can shell out $30 (in which case you should try the 30-day trial of the optimizations first) or do it yourself (a mildly annoying process but definitely doable).
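To check which BLAS/LAPACK your numpy build is actually linked against (a quick sanity check, not a benchmark):

```python
import numpy as np

# Prints the BLAS/LAPACK libraries this numpy build was compiled against
np.show_config()
```

With MKL or OpenBLAS, the number of threads used can be controlled through environment variables such as MKL_NUM_THREADS or OMP_NUM_THREADS.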

I'd definitely try a sparse eigenvalue solver as well, though.

answered Sep 26 '22 by Danica