
Finding generalized eigenvectors numerically in Matlab

I have a matrix such as this example (my actual matrices can be much larger)

A = [-1 -2   -0.5;
      0  0.5  0;
      0  0   -1];

that has only two linearly independent eigenvectors (the eigenvalue -1 is repeated). I would like to obtain a complete basis with generalized eigenvectors. One way I know how to do this is with Matlab's jordan function in the Symbolic Math toolbox, but I'd prefer something designed for numeric inputs (indeed, with two outputs, jordan fails for large matrices: "Error in MuPAD command: Similarity matrix too large."). I don't need the Jordan canonical form, which is notoriously unstable in numeric contexts, just a matrix of generalized eigenvectors. Is there a function or combination of functions that automates this in a numerically stable way or must one use the generic manual method (how stable is such a procedure)?

NOTE: By "generalized eigenvector," I mean a non-zero vector that can be used to augment the incomplete basis of a so-called defective matrix. I do not mean the eigenvectors that correspond to the eigenvalues obtained from solving the generalized eigenvalue problem using eig or qz (though this latter usage is quite common, I'd say that it's best avoided). Unless someone can correct me, I don't believe the two are the same.
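For a concrete illustration of this definition using the example matrix above (a pencil-and-paper-style check, not the numerical scheme I'm asking about): with lambda = -1, the nullspace of (A - lambda*I)^2 is two-dimensional while the nullspace of (A - lambda*I) is only one-dimensional, so a vector from the gap between the two augments the basis:

```matlab
A = [-1 -2   -0.5;
      0  0.5  0;
      0  0   -1];
lambda = -1;                 % the repeated eigenvalue
M = A - lambda*eye(3);
rank(M)                      % 2, so lambda = -1 has only one ordinary eigenvector
null(M)                      % the ordinary eigenvector: [1; 0; 0]
w = [0; 0; 1];               % lies in null(M^2) but not in null(M)
M*w                          % nonzero ([-0.5; 0; 0]), so w is not an eigenvector
M^2*w                        % zero vector, so w is a generalized eigenvector
```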


UPDATE 1 – Five months later:

See my answer here for how to obtain generalized eigenvectors symbolically for matrices larger than 82-by-82 (the limit for my test matrix in this question).

I'm still interested in numeric schemes (or in how such schemes might be unstable if they're all related to calculating the Jordan form). I don't wish to blindly implement the linear algebra 101 method that has been marked as a duplicate of this question, as it's not a numerical algorithm, but rather a pencil-and-paper method used to evaluate students (I suppose that it could be implemented symbolically, however). If anyone can point me to either an implementation of that scheme or a numerical analysis of it, I'd be interested in that.

UPDATE 2 – Feb. 2015: All of the above is still true as tested in R2014b.

asked Sep 03 '13 by horchler




1 Answer

As mentioned in my comments, if your matrix is defective, but you know which eigenvector/eigenvalue pairs you want to consider as identical given your tolerance, you can proceed as in the example below:

% example matrix A:
A = [1 0 0 0 0; 
     3 1 0 0 0; 
     6 3 2 0 0; 
     10 6 3 2 0;
     15 10 6 3 2]
% Produce eigenvalues and eigenvectors (not generalized ones)
[vecs,vals] = eig(A)

This should output:

vecs =

         0         0         0         0    0.0000
         0         0         0    0.2236   -0.2236
         0         0    0.0000   -0.6708    0.6708
         0    0.0000   -0.0000    0.6708   -0.6708
    1.0000   -1.0000    1.0000   -0.2236    0.2236


vals =

 2     0     0     0     0
 0     2     0     0     0
 0     0     2     0     0
 0     0     0     1     0
 0     0     0     0     1

Here we see that the first three eigenvectors are numerically identical to working precision, as are the last two. You must know the structure of your problem and identify the identical eigenvectors of identical eigenvalues. Here the eigenvalues are exactly identical, so we know which ones to consider, and we will assume that vectors 1-2-3 correspond to the same eigenvector, as do vectors 4-5. (In practice you would likely check the norm of the differences between eigenvectors and compare it to your tolerance.)
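That tolerance-based grouping might be sketched as follows (illustrative only; the choice of `tol` and the sign-insensitive comparison are my assumptions, since `eig` normalizes eigenvectors but may flip their signs):

```matlab
tol = 1e-6;
d = diag(vals);
for i = 1:numel(d)
    for j = i+1:numel(d)
        if abs(d(i) - d(j)) < tol
            % compare eigenvector directions up to sign
            dv = min(norm(vecs(:,i) - vecs(:,j)), norm(vecs(:,i) + vecs(:,j)));
            if dv < tol
                fprintf('columns %d and %d are numerically the same eigenvector\n', i, j);
            end
        end
    end
end
```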

Now we proceed to compute the generalized eigenvectors. This is ill-conditioned to solve directly with MATLAB's \ operator, because (A - lambda*I) is by construction not full rank, so we use pseudoinverses:

genvec21 = pinv(A - vals(1,1)*eye(size(A)))*vecs(:,1);
genvec22 = pinv(A - vals(1,1)*eye(size(A)))*genvec21;
genvec1 = pinv(A - vals(4,4)*eye(size(A)))*vecs(:,4);

Which should give:

genvec21 =

   -0.0000
    0.0000
   -0.0000
    0.3333
         0

genvec22 =

    0.0000
   -0.0000
    0.1111
   -0.2222
         0

genvec1 =

    0.0745
   -0.8832
    1.5317
    0.6298
   -3.5889

These are our other generalized eigenvectors. If we now use them to check that we recover the Jordan normal form like this:

jordanJ = [vecs(:,1) genvec21 genvec22 vecs(:,4) genvec1];
jordanJ^-1*A*jordanJ

We obtain:

ans =

    2.0000    1.0000    0.0000   -0.0000   -0.0000
         0    2.0000    1.0000   -0.0000   -0.0000
         0    0.0000    2.0000    0.0000   -0.0000
         0    0.0000    0.0000    1.0000    1.0000
         0    0.0000    0.0000   -0.0000    1.0000

which is our Jordan normal form, up to working-precision errors.
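As an extra sanity check on the chains themselves (this verification step is my sketch; it is implied by the superdiagonal 1's in the output above rather than spelled out in the original), each generalized eigenvector should map onto the previous member of its chain under (A - lambda*I):

```matlab
I5 = eye(size(A));
norm((A - 2*I5)*genvec21 - vecs(:,1))  % ~0: first link of the chain for lambda = 2
norm((A - 2*I5)*genvec22 - genvec21)   % ~0: second link of that chain
norm((A - I5)*genvec1 - vecs(:,4))     % ~0: chain for lambda = 1
```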

answered Oct 19 '22 by reverse_engineer