 

Eigenvectors are complex but only for large matrices

I'm trying to calculate the eigenvectors and eigenvalues of this matrix

Tridiagonal Matrix Example

import numpy as np

la = 0.02
mi = 0.08
n = 500

# Main diagonal: -(la + mi) everywhere except the boundary entries.
d1 = np.full(n, -(la + mi), np.double)
d1[0] = -la
d1[-1] = -mi
# Off-diagonals: la below the main diagonal, mi above it.
d2 = np.full(n - 1, la, np.double)
d3 = np.full(n - 1, mi, np.double)

A = np.diag(d1) + np.diag(d2, -1) + np.diag(d3, 1)
e_values, e_vectors = np.linalg.eig(A)

If I set the dimension of the matrix to n < 110, the output is real, as expected. However, if I set it to n >= 110, both the eigenvalues and the eigenvector components become complex numbers with significant imaginary parts. Why does this happen? Is it supposed to happen? It's very strange behavior and frankly I'm stuck.

asked May 18 '20 by hellvetica

People also ask

Do all complex matrices have eigenvectors?

For instance, every complex matrix has an eigenvalue. Every real matrix has an eigenvalue, but it may be complex. In fact, a field K is algebraically closed iff every matrix with entries in K has an eigenvalue. You can use the companion matrix to prove one direction.
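As a quick sketch of that companion-matrix direction (assuming NumPy; the polynomial and matrix below are illustrative, not from the question): the companion matrix of a monic polynomial has that polynomial as its characteristic polynomial, so its eigenvalues are exactly the polynomial's roots.

```python
import numpy as np

# Companion matrix of p(x) = x^2 - 3x + 2 = (x - 1)(x - 2).
# det(xI - C) = x(x - 3) + 2 = x^2 - 3x + 2, so the eigenvalues
# of C are precisely the roots of p.
C = np.array([[0.0, -2.0],
              [1.0,  3.0]])

roots = np.sort(np.linalg.eigvals(C))
print(roots)  # approximately [1. 2.]
```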

Do complex eigenvalues have complex eigenvectors?

This is very easy to see; recall that if an eigenvalue is complex, its eigenvectors will in general be vectors with complex entries (that is, vectors in Cn, not Rn). If λ ∈ C is a complex eigenvalue of A, with a non-zero eigenvector v ∈ Cn, by definition this means: Av = λv, v ≠ 0.

Can a complex matrix have complex eigenvalues?

As a consequence of the fundamental theorem of algebra as applied to the characteristic polynomial, we see that: Every n × n matrix has exactly n complex eigenvalues, counted with multiplicity. We can compute a corresponding (complex) eigenvector in exactly the same way as before: by row reducing the matrix A − λ I n .
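A small numerical illustration (assuming NumPy; the rotation matrix is just an example): a real matrix with no real eigenvalues still has a full set of complex eigenpairs, and each computed pair satisfies Av = λv.

```python
import numpy as np

# A 90-degree rotation matrix: real entries, but no real eigenvalues.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w, V = np.linalg.eig(A)  # eigenvalues are +i and -i

# Each column of V is an eigenvector for the matching eigenvalue.
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)

print(np.sort_complex(w))
```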

Can two matrices have the same eigenvectors but different eigenvalues?

If two matrices have the same set of eigenvectors but different eigenvalues, then they can be simultaneously diagonalized, which means that the two matrices commute with each other; that is, if the two matrices are A and B, then AB = BA.
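A minimal numerical check of that statement (assuming NumPy; the diagonal matrices are illustrative): two diagonal matrices share the standard basis vectors as eigenvectors, have different eigenvalues, and therefore commute.

```python
import numpy as np

# Both matrices have the standard basis vectors as eigenvectors,
# but different eigenvalues (their diagonal entries).
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([4.0, 5.0, 6.0])

# Simultaneously diagonalizable implies AB = BA:
print(np.allclose(A @ B, B @ A))  # True
```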


1 Answer

What you are seeing appears to be fairly normal roundoff error, an unfortunate consequence of storing floating-point numbers with finite precision. It naturally gets relatively worse for larger problems. Here is a plot of the real vs. imaginary components of the eigenvalues:

[Plot: real vs. imaginary components of the eigenvalues]

You can see that the imaginary parts are effectively noise. That is not to say they are always negligible: here is a plot of the imaginary vs. real part, showing that the ratio can get as large as 0.06 in the worst case:
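The size of that noise can also be checked numerically without plotting; a sketch reusing the question's setup:

```python
import numpy as np

la, mi, n = 0.02, 0.08, 500

# Rebuild the tridiagonal matrix from the question.
d1 = np.full(n, -(la + mi))
d1[0], d1[-1] = -la, -mi
A = np.diag(d1) + np.diag(np.full(n - 1, la), -1) + np.diag(np.full(n - 1, mi), 1)

w = np.linalg.eigvals(A)

# The imaginary parts are roundoff noise: small compared with the
# spread of the real parts, which is on the order of la + mi.
print(np.abs(w.imag).max(), np.abs(w.real).max())
```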

[Plot: ratio of imaginary to real part, reaching ~0.06]

This ratio depends on both the absolute magnitudes of la and mi and their relative size. If you multiply both by 10, you get

[Plot: la and mi both multiplied by 10]

If you keep la = 0.02 and set mi = 0.8, you get a smaller imaginary part:

[Plot: la = 0.02, mi = 0.8]

Things get really weird when you do the opposite, and increase la by a factor of 10, keeping mi as-is:

[Plot: la increased tenfold, mi unchanged]

The relative precision of the calculation decreases for smaller eigenvalues, so this is not too surprising.

Given the relatively small magnitudes of the imaginary parts (at least for the important eigenvalues), you can either take the magnitude or the real part of the result, since you know that all the eigenvalues are real (a real tridiagonal matrix whose pairs of off-diagonal entries have positive products is similar to a symmetric matrix, so its spectrum is real).
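For example, discarding the noisy imaginary parts might look like this (a sketch; a smaller n is used here just to keep it quick):

```python
import numpy as np

la, mi, n = 0.02, 0.08, 200

# Rebuild the tridiagonal matrix from the question.
d1 = np.full(n, -(la + mi))
d1[0], d1[-1] = -la, -mi
A = np.diag(d1) + np.diag(np.full(n - 1, la), -1) + np.diag(np.full(n - 1, mi), 1)

w, V = np.linalg.eig(A)

# The spectrum is known to be real, so keep only the real parts
# (valid for the eigenvectors too, since the true ones are real here).
w_clean = np.real(w)
V_clean = np.real(V)

print(w_clean.dtype, V_clean.dtype)
```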

answered Oct 28 '22 by Mad Physicist