If I have homogeneous linear equations like this
array([[-0.75,  0.25,  0.25,  0.25],
       [ 1.  , -1.  ,  0.  ,  0.  ],
       [ 1.  ,  0.  , -1.  ,  0.  ],
       [ 1.  ,  0.  ,  0.  , -1.  ]])
And I want to get a non-zero solution for it. How can it be done with NumPy?
EDIT
linalg.solve only works on A * x = b where b is not all zeros: with b = 0 it can only return the trivial solution x = 0, or fail because A is singular, so it cannot give a non-zero solution here.
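To illustrate, here is a minimal sketch using the 4x4 matrix from the question (the LinAlgError branch is what NumPy raises for a singular coefficient matrix):

import numpy as np

A = np.array([[-0.75,  0.25,  0.25,  0.25],
              [ 1.  , -1.  ,  0.  ,  0.  ],
              [ 1.  ,  0.  , -1.  ,  0.  ],
              [ 1.  ,  0.  ,  0.  , -1.  ]])

# A homogeneous system with a non-zero solution has a singular coefficient
# matrix, so solve() cannot help: it either returns the all-zero vector or
# raises LinAlgError.
try:
    print(np.linalg.solve(A, np.zeros(4)))   # at best the trivial solution [0. 0. 0. 0.]
except np.linalg.LinAlgError as err:
    print(err)                               # "Singular matrix" for this A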
For that matter, the best solution (in the least-squares sense) of an over-constrained homogeneous linear system is the eigenvector of U^T U associated with its smallest eigenvalue. So given U as the coefficient matrix of the system, the solution is:
import numpy as np

def solution(U):
    # find the eigenvalues and eigenvectors of U^T . U
    e_vals, e_vecs = np.linalg.eig(np.dot(U.T, U))
    # extract the eigenvector (column) associated with the minimum eigenvalue
    return e_vecs[:, np.argmin(e_vals)]
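A quick check with the matrix from the question (the returned eigenvector has unit norm and an arbitrary sign):

A = np.array([[-0.75, 0.25, 0.25, 0.25],
              [1., -1., 0., 0.],
              [1., 0., -1., 0.],
              [1., 0., 0., -1.]])

x = solution(A)
print(x)                             # proportional to [0.5, 0.5, 0.5, 0.5], up to sign
print(np.allclose(np.dot(A, x), 0))  # True, so x is a non-zero solution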
If you are familiar with the singular value decomposition (SVD), you can use:
_, _, V = np.linalg.svd(A, full_matrices=True)
v = V[-1]
Then v is the last row of the last matrix in the decomposition (NumPy returns V transposed, so its rows are the right singular vectors), and it corresponds to the smallest singular value. If the n x m system (n >= m) really admits a non-zero solution, that smallest singular value is (numerically) zero, which means A.v = 0; otherwise v is still the unit vector that minimises ||A.v||.
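Checked against the matrix from the question (a small sketch; the sign of v is arbitrary):

import numpy as np

A = np.array([[-0.75, 0.25, 0.25, 0.25],
              [1., -1., 0., 0.],
              [1., 0., -1., 0.],
              [1., 0., 0., -1.]])

_, s, V = np.linalg.svd(A, full_matrices=True)
v = V[-1]
print(s[-1])                      # smallest singular value, ~0 for this matrix
print(np.allclose(A.dot(v), 0))   # True: v spans the null space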
You can use an SVD or a QR decomposition to compute the null space of the linear system, e.g., something like:
import numpy

def null(A, eps=1e-15):
    u, s, vh = numpy.linalg.svd(A)
    null_space = numpy.compress(s <= eps, vh, axis=0)
    return null_space.T
This yields for your example:
>>> A
matrix([[-0.75,  0.25,  0.25,  0.25],
        [ 1.  , -1.  ,  0.  ,  0.  ],
        [ 1.  ,  0.  , -1.  ,  0.  ],
        [ 1.  ,  0.  ,  0.  , -1.  ]])
>>> null(A).T
array([[-0.5, -0.5, -0.5, -0.5]])
>>> (A*null(A)).T
matrix([[ 1.66533454e-16, -1.66533454e-16, -2.22044605e-16, -2.22044605e-16]])
See also the section Numerical computation of null space on Wikipedia.
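Since QR is mentioned above as an alternative to the SVD, here is a minimal QR-based sketch (my own illustration built on numpy.linalg.qr applied to A.T, not part of the answer above; plain QR has no column pivoting, so its rank estimate is cruder than the SVD version):

import numpy as np

def null_qr(A, eps=1e-12):
    # factor A.T = Q R; the columns of Q beyond the numerical rank are
    # orthogonal to the row space of A, i.e. they span the null space of A
    A = np.atleast_2d(A)
    q, r = np.linalg.qr(A.T, mode='complete')
    rank = int(np.sum(np.abs(np.diag(r)) > eps))
    return q[:, rank:]

For the example matrix this should return a single column spanning the same one-dimensional null space as null(A).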