
scipy eigh gives negative eigenvalues for positive semidefinite matrix

I am having some issues with scipy's eigh function returning negative eigenvalues for positive semidefinite matrices. Below is an MWE.

The hess_R function returns a positive semidefinite matrix (it is the sum of a rank-one matrix and a diagonal matrix, both with nonnegative entries).

import numpy as np
from scipy import linalg as LA

def hess_R(x):
    # Rank-one all-ones term plus a nonnegative diagonal term, so the
    # result should be positive semidefinite.
    d = len(x)
    H = np.ones((d, d)) / (1 - np.sum(x))**2
    H = H + np.diag(1 / x**2)
    return H.astype(np.float64)

x = np.array([9.98510710e-02, 9.00148922e-01, 4.41547488e-10])
H = hess_R(x)
w, v = LA.eigh(H)
print(w)

The eigenvalues printed are

[ -6.74055241e-271   4.62855397e+016   5.15260753e+018]

If I replace np.float64 with np.float32 in the return statement of hess_R, I get

[ -5.42905303e+10   4.62854925e+16   5.15260506e+18]

instead, so I am guessing this is some sort of precision issue.

Is there a way to fix this? Technically I do not need to use eigh, but I think this is the underlying problem with my other errors (taking square roots of these matrices, getting NaNs, etc.)
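For example, here is roughly how the NaNs show up when I build a matrix square root from the eigendecomposition (a minimal sketch using the H defined above, not code from my actual program):

# Sketch: an eigendecomposition-based matrix square root propagates the
# spurious negative eigenvalue into NaNs, since np.sqrt(negative) is nan.
w, v = LA.eigh(H)
sqrt_H = np.dot(v * np.sqrt(w), v.T)  # np.sqrt(w[0]) is nan when w[0] < 0
print(np.isnan(sqrt_H).any())         # True whenever any eigenvalue is negative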

asked Apr 24 '16 by angryavian


1 Answer

I think the issue is that you've hit the limits of floating-point precision. A good rule of thumb for linear algebra results is that they're good to about one part in 10^8 for float32, and about one part in 10^16 for float64. Here the ratio of your smallest to largest eigenvalue is less than 10^-16; because of this, the returned smallest value cannot really be trusted and will depend on the details of the eigenvalue implementation you use.
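One quick way to see this (a check of my own, not part of the original answer) is to compare that ratio against float64 machine epsilon:

# Sketch: the smallest/largest eigenvalue ratio is far below float64's
# machine epsilon, so the sign of the smallest eigenvalue is pure noise.
w = LA.eigh(H, eigvals_only=True)
print(np.finfo(np.float64).eps)   # ~2.2e-16
print(abs(w[0]) / w[-1])          # many orders of magnitude below eps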

For example, here are four different solvers you should have available; take a look at their results:

# using the 64-bit version
for impl in [np.linalg.eig, np.linalg.eigh, LA.eig, LA.eigh]:
    w, v = impl(H)
    print(np.sort(w))
    reconstructed = np.dot(v * w, v.conj().T)
    print("Allclose:", np.allclose(reconstructed, H), '\n')

Output:

[ -3.01441754e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 

[  3.66099625e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 

[ -3.01441754e+02+0.j   4.62855397e+16+0.j   5.15260753e+18+0.j]
Allclose: True 

[  3.83999999e+02   4.62855397e+16   5.15260753e+18]
Allclose: True 

Notice that they all agree on the two larger eigenvalues, but the value of the smallest changes from implementation to implementation. Still, in all four cases the input matrix is reconstructed to 64-bit precision: the algorithms are operating as expected up to the precision available to them.
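If you need a positive semidefinite result downstream (e.g. for matrix square roots), one common workaround (my suggestion, not something the solvers do for you) is to clip the eigenvalues at zero before recombining:

# Workaround sketch: treat eigenvalues below zero as exactly zero, since
# they are noise at this precision, then rebuild the matrix square root.
w, v = LA.eigh(H)
w = np.clip(w, 0, None)                 # clamp negative noise to 0
sqrt_H = np.dot(v * np.sqrt(w), v.T)    # symmetric PSD square root
print(np.allclose(sqrt_H @ sqrt_H, H))  # True, up to float64 precision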

answered Nov 05 '22 by jakevdp