I've written a piece of code to solve the 1-dimensional Schrödinger equation. While the numpy.linalg.eig() routine has been working fine for the harmonic oscillator, it seems to add one spurious solution for the Coulomb potential. On the other hand, SciPy's sparse.linalg.eigsh() appears to do well. Here is my script:
import numpy as np
import matplotlib.pyplot as plt
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh
N = 500
x0 = 8
xMin, xMax = -x0, x0
xstep = (xMax - xMin) / (N - 1)
x = np.linspace(xMin, xMax, N)
k = [np.ones(N-1), -2*np.ones(N), np.ones(N-1)]  # diagonals of the finite-difference Laplacian
offset = [-1, 0, 1]
Lapl = diags(k, offset).toarray() / (xstep**2)
T = -0.5 * Lapl  # kinetic energy operator
V = -1./(np.abs(x)) #Coulomb
#V = .5 * x**2 #HO
H = T.copy()
for i in range(N):
    H[i,i] += V[i]
#vals, vecs = np.linalg.eig(H)
vals, vecs = eigsh(H, which='SM')
idx = vals.argsort()[::1]
vals, vecs = vals[idx], vecs[:,idx]
#exact solution
Hatom = (np.pi)**(-1./2) * np.exp(- np.abs(x)) * np.abs(x) * np.sqrt(4 * np.pi)
norm = np.sum(Hatom**2)
Hatom = Hatom / np.sqrt(norm)
#numerical solution
GS = vecs[:,0] #0th is the gs if using sp's eigsh, but is spurious if using np.linalg.eig
normGS = np.sum(GS**2)
GS = GS / np.sqrt(normGS)
plt.plot(x, Hatom**2, label = 'exact')
plt.plot(x, GS**2, label = 'numeric')
plt.legend()
plt.show()
print( np.round(vals[:10], 4) )
which yields the following plots (I'm also having trouble showing the pictures directly here, sorry about that!):
Using numpy: [image: np_0th_eigenvector_spurious] & [image: np_1th_eigenvector_gs]
Using scipy: [image: sp_gs]
I'd expect this to come from a different handling of the singularity of the Coulomb potential (though I chose an even number of grid points to avoid x = 0), since both numpy and scipy work fine for the harmonic oscillator and the Morse potential (not reproduced here for brevity). Yet this makes it tricky to handle arbitrary potentials!
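For reference, here is a quick check of how close the grid actually gets to the singularity, reusing the grid parameters from the script above:

import numpy as np

N = 500
xMin, xMax = -8, 8
x = np.linspace(xMin, xMax, N)

# With an even number of points the grid is symmetric about 0 and never
# samples x = 0 itself; the two innermost points sit at roughly +/- xstep/2.
print(np.min(np.abs(x)))         # ~0.016
print(-1.0 / np.min(np.abs(x)))  # potential value at the closest grid point, ~ -62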
What's more, the eigenvalues for the Coulomb potential land pretty far from the 1/n^2 sequence (the lowest one comes from using numpy):
vals: [-15.220171 -0.500363 -0.331042 -0.085621 -0.02475 0.242598
0.344308 0.741555 0.885751 1.402606]
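For comparison, in the units used in the script (hbar = m = e = 1), the hydrogen-like reference I have in mind is E_n = -1/(2 n^2), which matches the -0.5 value above for n = 1; a short sketch of that sequence:

import numpy as np

# Hydrogen-like reference energies E_n = -1/(2 n^2) in the same units.
n = np.arange(1, 6)
print(np.round(-1.0 / (2 * n**2), 4))  # -0.5, -0.125, -0.0556, ...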
What am I doing wrong here? Does numpy/scipy contain a routine I could use safely to solve the eigenvalue problem, regardless of the potential?
Thanks in advance!
The argument which='SM' in eigsh tells the function to find the k eigenvalues with the smallest magnitude. The most negative eigenvalue is about -15.22, but this will not be included in the results of eigsh, because there are many other positive and negative eigenvalues with magnitudes less than 15.22. If you want to compare the results to those of numpy.linalg.eig (or better, numpy.linalg.eigh), use which='SA'. That tells eigsh to find the algebraically smallest values. If you do that, then the vals that you compute with eigsh will agree with the first 6 eigenvalues computed by numpy.linalg.eigh.
After making this change, you'll have to change the selection of the numerically computed eigenvector to be plotted to GS = vecs[:,1] to have the plots of the exact and numerical results agree.
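Putting both changes together, a minimal sketch of the corrected calls (reusing H and np from the script in the question; eigsh returns k=6 eigenpairs by default):

from scipy.sparse.linalg import eigsh

vals, vecs = eigsh(H, k=6, which='SA')  # algebraically smallest eigenvalues
idx = vals.argsort()
vals, vecs = vals[idx], vecs[:, idx]

GS = vecs[:, 1]  # index 0 is the deep state localized near x = 0; index 1 is the physical ground state
GS = GS / np.sqrt(np.sum(GS**2))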
Following what @Warren said about which k eigenvectors and eigenvalues to find, you can also take advantage of the tridiagonal structure of the matrix H by using scipy.linalg.eigh_tridiagonal here instead of just np.linalg.eigh, which gives you:
from scipy.linalg import eigh_tridiagonal
d = np.diag(H)       # main diagonal of H
e = np.diag(H, k=1)  # first off-diagonal of H
vals, vecs = eigh_tridiagonal(d, e)
idx = vals.argsort()[::1]
vals, vecs = vals[idx], vecs[:,idx]
and select GS = vecs[:,1] accordingly, to get the same results with roughly a 2x faster computation:
%timeit scipy.linalg.eigh_tridiagonal(d,e)
>>> 10.7 ms ± 37.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit numpy.linalg.eigh(H)
>>> 19.4 ms ± 28.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit scipy.sparse.linalg.eigsh(H, which='SA')
>>> 115 ms ± 1.98 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
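As a side note, if you only need the lowest few states, eigh_tridiagonal can also restrict the computation to a range of eigenvalue indices through its select and select_range arguments; a small sketch reusing d and e from above:

from scipy.linalg import eigh_tridiagonal

# Compute only the eigenpairs with indices 0..5 (the six algebraically smallest)
# instead of the full spectrum.
vals, vecs = eigh_tridiagonal(d, e, select='i', select_range=(0, 5))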