SciPy and NumPy have, between them, three different functions for finding the eigenvectors of a given square matrix:

numpy.linalg.eig(a)
scipy.linalg.eig(a)
scipy.sparse.linalg.eig(A, k)
Focusing specifically on the situation where all the optional arguments I've left off the last two are kept at their defaults, and where a/A is real-valued, I am curious about the differences among these three that are ambiguous from the documentation, especially:

Why does the third one require the additional k argument (the number of eigenvectors to compute)?
Why does the third one assume that A is sparse? (Mathematically speaking, rather than being represented as a SciPy sparse matrix.) Can it be inefficient, or even give wrong results, if this assumption doesn't hold?

SciPy is built using the optimized ATLAS LAPACK and BLAS libraries, so it has very fast linear algebra capabilities. All of these linear algebra routines expect an object that can be converted into a two-dimensional array.
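For concreteness, a minimal sketch of calling all three on the same small matrix (the matrix values are arbitrary, and note that in current SciPy the sparse routine is actually spelled eigs):

import numpy as np
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg

# An arbitrary real-valued square matrix, just for illustration.
a = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# The two dense routines compute all n eigenpairs at once.
w_np, v_np = np.linalg.eig(a)
w_sp, v_sp = scipy.linalg.eig(a)

# The sparse routine (spelled eigs in current SciPy) computes only k
# eigenpairs, and k must be smaller than n - 1.
A = scipy.sparse.csr_matrix(a)
w_k, v_k = scipy.sparse.linalg.eigs(A, k=1)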
A vector y satisfying dot(y.T, a) = z * y.T for some number z is called a left eigenvector of a, and, in general, the left and right eigenvectors of a matrix are not necessarily the (perhaps conjugate) transposes of each other. That is, you need to transpose the vectors in vl: vl[:,i].T is the i-th left eigenvector.
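A quick check of this relation (a minimal sketch; the 2x2 matrix is arbitrary):

import numpy as np
from scipy import linalg

# An arbitrary real, non-symmetric matrix.
a = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# left=True makes eig also return the left eigenvectors (columns of vl).
w, vl, vr = linalg.eig(a, left=True, right=True)

for i in range(len(w)):
    y = vl[:, i]
    # Verify y^H a = w[i] * y^H (conj() is a no-op here since w is real).
    assert np.allclose(y.conj().T @ a, w[i] * y.conj().T)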
The linalg solve() function solves the equation ax = b and returns the solution x, which has the same shape as b. This function raises LinAlgError if the first matrix (a) is singular or not square.
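A short sketch of both behaviours (the matrices here are arbitrary examples):

import numpy as np
from scipy import linalg

a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = linalg.solve(a, b)        # solves a @ x = b for x
assert np.allclose(a @ x, b)  # x has the same shape as b

# A singular (or non-square) a raises LinAlgError:
try:
    linalg.solve(np.zeros((2, 2)), b)
except linalg.LinAlgError as err:
    print("LinAlgError:", err)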
We use the multiply() method provided by both the csc_matrix and csr_matrix classes to multiply two sparse matrices element-wise. The two matrices can be in the same format (both csc or both csr) or in different formats (one csc and the other csr).
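For example (note that multiply() is element-wise, while @ performs the usual matrix product):

import numpy as np
from scipy import sparse

A = sparse.csr_matrix([[1, 0],
                       [2, 3]])
B = sparse.csc_matrix([[4, 5],
                       [0, 6]])

# multiply() is element-wise (Hadamard) multiplication; mixing the
# csr and csc formats is fine.
C = A.multiply(B)
print(C.toarray())   # [[ 4  0]
                     #  [ 0 18]]

# For the usual matrix product, use @ (or .dot()) instead:
D = A @ B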
The special behaviour of the third one has to do with the Lanczos algorithm, which works very well with sparse matrices. The documentation of scipy.sparse.linalg.eig
says it uses a wrapper for ARPACK, which in turn uses "the Implicitly Restarted Arnoldi Method (IRAM) or, in the case of symmetric matrices, the corresponding variant of the Lanczos algorithm." (1).
Now, the Lanczos algorithm has the property that it works better for large eigenvalues; in fact, it converges to the extreme (largest-magnitude) eigenvalues first:
In practice, this simple algorithm does not work very well for computing very many of the eigenvectors because any round-off error will tend to introduce slight components of the more significant eigenvectors back into the computation, degrading the accuracy of the computation. (2)
So, whereas the Lanczos algorithm only gives an approximation of a few eigenpairs, I guess the other two methods use algorithms that find the exact eigenvalues (up to round-off error), and seemingly all of them, which probably depends on the underlying algorithms, too.
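To make the contrast concrete, here is a minimal sketch under those assumptions (again, current SciPy spells the sparse routine eigs, and the random test matrix is just an illustration):

import numpy as np
from scipy import sparse
from scipy.sparse import linalg as splinalg

# A large, mathematically sparse random matrix (values are arbitrary).
n = 1000
A = sparse.random(n, n, density=0.01, format="csr", random_state=0)

# ARPACK iteratively approximates only the k eigenpairs of largest
# magnitude; it never forms all n of them.
w_k, v_k = splinalg.eigs(A, k=5, which="LM")

# The dense LAPACK route computes all n eigenvalues at once, at
# O(n^3) cost and with dense O(n^2) memory.
w_all = np.linalg.eigvals(A.toarray())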