
floating point error in Ruby matrix calculation

I'm writing some code that involves finding the eigenvectors of a given matrix, and was surprised that Ruby produces some unreasonable results in simple cases.

For example, the following matrix has an eigenvector associated with eigenvalue 1:

> m = Matrix[[0r, 1/2r, 1/2r, 1/3r],
             [0r,  0r,  1/4r, 1/3r],
             [0r, 1/4r,  0r,  1/3r],
             [1r, 1/4r, 1/4r,  0r]]

Ruby finds the eigenvalues well enough, but the eigenvector explodes:

> m.eigen.eigenvalues[2]
=> 1.0000000000000009

> m.eigen.eigenvectors[2]
=> Vector[5.957702309312754e+15, 5.957702309312748e+15, 5.957702309312743e+15, 5.957702309312753e+15]

The actual eigenvector should be (7, 4, 4, 9).
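This is easy to check with exact rational arithmetic, since Matrix keeps Rational entries exact:

```ruby
require 'matrix'

m = Matrix[[0r, 1/2r, 1/2r, 1/3r],
           [0r,  0r,  1/4r, 1/3r],
           [0r, 1/4r,  0r,  1/3r],
           [1r, 1/4r, 1/4r,  0r]]
v = Vector[7r, 4r, 4r, 9r]

# m maps (7, 4, 4, 9) to itself, so it is an eigenvector for eigenvalue 1
m * v == v
```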

Isn't this troubling? If Ruby can't handle tiny matrices, then how can we trust it at all? Or am I doing something wrong?

Théophile asked Dec 12 '17 21:12

1 Answer

No, this is not troubling. That matrix likely just doesn't work well with that particular eigenvector algorithm implementation. Efficient and stable general eigenvector computation is nontrivial, after all.

The Matrix library is adapted from JAMA, a Java matrix package, which performs numerical rather than symbolic computation, as its documentation notes:

Not Covered. JAMA is by no means a complete linear algebra environment ... it focuses on the principle mathematical functionality required to do numerical linear algebra

The QR Algorithm: Numerical Computation

Looking at the source code for Matrix::EigenvalueDecomposition, I found that it uses the QR algorithm. I don't fully understand the intricacies of the mathematics, but I think I can see why this computation fails. The computation works as follows:

At the k-th step (starting with k = 0), we compute the QR decomposition A_k = Q_k R_k ... Under certain conditions, the matrices A_k converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, and the eigenvalue problem is solved.

In "pseudo" Ruby, this conceptually means:

working_matrix = orig_matrix.dup
all_q_matrices = []
loop do
  q, r = working_matrix.qr_decomposition
  all_q_matrices << q
  next_matrix = r * q
  break if difference_between(working_matrix, next_matrix) < accuracy_threshold
  working_matrix = next_matrix
end
eigenvalues = working_matrix.diagonal_values

For eigenvectors, it continues:

upon convergence, AQ = QΛ, where Λ is the diagonal matrix of eigenvalues to which A converged, and where Q is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of Q are the eigenvectors.

In "pseudo" Ruby, continued:

eigenvectors = all_q_matrices.inject(:*).columns
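Since the stdlib Matrix class exposes no QR method, here is a rough, runnable sketch of the same iteration on a small symmetric matrix, using a hypothetical qr_decomposition helper based on classical Gram-Schmidt (an assumption for illustration, not the actual JAMA-derived implementation):

```ruby
require 'matrix'

# Hypothetical helper: QR decomposition via classical Gram-Schmidt.
# (Ruby's stdlib Matrix has no built-in QR method.)
def qr_decomposition(a)
  n = a.column_count
  q_cols = []
  r = Matrix.zero(n).to_a
  (0...n).each do |j|
    v = a.column(j)
    (0...j).each do |i|
      r[i][j] = q_cols[i].inner_product(v)   # project onto earlier columns
      v = v - q_cols[i] * r[i][j]            # and subtract the projection
    end
    r[j][j] = v.norm
    q_cols << v / r[j][j]                    # normalized orthogonal column
  end
  [Matrix.columns(q_cols.map(&:to_a)), Matrix.rows(r)]
end

a = Matrix[[2.0, 1.0], [1.0, 2.0]]  # symmetric, eigenvalues 3 and 1
q_total = Matrix.identity(2)
50.times do
  q, r = qr_decomposition(a)
  a = r * q                # A_{k+1} = R_k * Q_k
  q_total *= q             # accumulate the similarity transforms
end
# The diagonal of a approaches the eigenvalues (3 and 1),
# and the columns of q_total approach the eigenvectors.
```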

Floating Point Error in Numerical Computations

We can see that the approximate eigenvalues are computed by an iteration of numerical computations, and that, as a side-effect, a bunch of approximate Q matrices are collected. These approximated Q matrices are then composed together to form the eigenvectors.

The compounding of approximations is what probably caused the wildly inaccurate results. An example of catastrophic cancellation on Math StackExchange shows a simple quadratic computation with 400% relative error. You might be able to imagine how an iterative matrix algorithm with repeated arithmetic operations could do much worse.
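As a minimal, contrived illustration (not the exact mechanism inside the QR iteration), a single subtraction of two nearly equal floats already destroys most of the significant digits:

```ruby
# Catastrophic cancellation in 64-bit floats: the true difference is
# exactly 1.0e-15, but the addition rounds to the nearest representable
# value first, and the subtraction then exposes that rounding error.
exact    = 1.0e-15
computed = (1.0 + 1.0e-15) - 1.0
relative_error = (computed - exact).abs / exact
# relative_error is roughly 0.11 -- about 11% error from one operation
```

Each QR step performs many such operations, and the errors compound across iterations.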

A Grain of Salt

Again, I don't have a deep understanding of the mathematics of the algorithm nor the implementation, so I don't know precisely what parts of the computation caused your specific 85110032990182200% error, but I hope you now can understand how it may have happened.

Kache answered Oct 15 '22 09:10