I am trying to compute the eigenvalues, λ (lambda), of a damped structure with the following equations of motion:
(λ²M + λC + K) x = 0,
where M, C, and K are sparse matrices. Using MATLAB's polyeig function works, but I would like to go to larger systems and take advantage of the sparsity of my matrices. I have used a state-space linearization to obtain a generalized eigenvalue problem as follows:
(A - λB) z = 0,
with
A = [K , 0 ; 0, -M],
B = [-C , -M ; -M, 0],
z = [x ; λx]
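For reference, here is a minimal sketch of how these blocks can be assembled while keeping everything sparse (assuming M, C, and K are already sparse; Z is just a named zero block):

n = size(M, 1);
Z = sparse(n, n);       % n-by-n all-zero sparse block
A = [K, Z; Z, -M];      % A = [K, 0; 0, -M]
B = [-C, -M; -M, Z];    % B = [-C, -M; -M, 0]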
Solving this with MATLAB's eigs function,
lambda = eigs(A,B,10,'sm')
produces the following output:
lambda =
1.0e+03 *
-0.2518 - 1.3138i
-0.2518 + 1.3138i
-0.4690 - 1.7360i
-0.4690 + 1.7360i
-0.4690 - 1.7360i
-0.4690 + 1.7360i
-0.5387 - 1.8352i
-0.5387 + 1.8352i
NaN + NaNi
NaN + NaNi
The first eight eigenvalues are correct, but it seems as though the last two eigenvalues were not able to converge. Increasing the number of Lanczos basis vectors does not seem to help.
Strangely, however, increasing the number of eigenvalues computed (k) allows more and more eigenvalues to converge:

k = 10:  Number of lambdas converged = 8
k = 20:  Number of lambdas converged = 8
k = 50:  Number of lambdas converged = 8
k = 100: Number of lambdas converged = 20
k = 120: Number of lambdas converged = 80
k = 150: Number of lambdas converged = 150

It may also be worth mentioning that many of the eigenvalues that do not converge with lower values of k appear to be degenerate, or at least very closely spaced.
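For completeness, a loop along the following lines reproduces the counts above (a sketch; it assumes A and B as defined earlier and counts NaN entries as non-converged):

for k = [10 20 50 100 120 150]
    lambda = eigs(A, B, k, 'sm');
    fprintf('k = %3d: Number of lambdas converged = %d\n', k, nnz(~isnan(lambda)));
end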
I was wondering if anybody can think of an explanation for this behavior? If so, is there any way to make all of the eigenvalues converge without making k very large? Thank you!
This is old, but still unanswered. Without the actual matrices it is difficult to be certain. This is my best guess:
eigs calls ARPACK routines. ARPACK exploits iterative methods (Arnoldi) to converge to, e.g., the eigenvalues with smallest magnitude (option 'sm'). As for any iterative method, the user can specify options such as the convergence Tolerance and the MaxIterations before the iterative process stops. The NaNs indicate eigenvalues that have not converged when MaxIterations is reached.
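In recent MATLAB releases both options can be passed as name-value arguments. A sketch, with illustrative values (the documented defaults are Tolerance = 1e-14 and MaxIterations = 300):

lambda = eigs(A, B, 10, 'smallestabs', ...
              'Tolerance', 1e-6, ...       % loosen the convergence tolerance
              'MaxIterations', 1000);      % allow more Arnoldi iterations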
An important option for Arnoldi methods is the dimension of the Krylov subspace used to approximate the solutions. This can be specified with the SubspaceDimension option in eigs. The default value is max(2*k, 20), so increasing k effectively increases the dimension of the Krylov subspace. If your problem requires a relatively large Krylov subspace to converge some eigenvalues to the desired Tolerance, this could explain why increasing k yields convergence of a larger number of eigenvalues.
To verify whether my guess is correct, you could either use a less restrictive Tolerance (1e-6 may be sufficient?) or increase the value of SubspaceDimension while keeping k constant.
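A sketch of the second check, keeping k = 10 fixed while enlarging the Krylov subspace well beyond the default max(2*k, 20); the value 100 is illustrative, not a recommendation:

lambda = eigs(A, B, 10, 'smallestabs', ...
              'SubspaceDimension', 100);   % larger Krylov basis, same k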