So currently I'm working with code looking like:
Q,R = np.linalg.qr(matrix)
Qb = np.dot(Q.T, new_mu[b][n])
x_qr = np.linalg.solve(R, Qb)
mu.append(x_qr)
The code works fine as long as my matrix is square, but as soon as it's not, the system is not solvable and I get errors. If I've understood it right, I can't use linalg.solve on non-square (or rank-deficient) matrices, but is there a way for me to get around this obstacle without using a least-squares solution?
No, this is not possible, as specified in the np.linalg.solve docs.
The issue is that given Ax = b, if A is not square, then the system is either over-determined (more equations than unknowns, so in general no exact solution exists) or under-determined (fewer equations than unknowns, so infinitely many solutions exist). Either way, there is no unique x that solves the equation, which is why np.linalg.solve requires a square matrix.
Intuitively, the idea is that if you have n (length of x) variables that you are trying to solve for, then you need exactly n equations to find a unique solution for x, assuming that these equations are not "redundant". In this case, "redundant" means linearly dependent: one equation is equal to the linear combination of one or more of the other equations.
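As a minimal sketch (with made-up numbers) of why np.linalg.solve rejects a non-square system:

```python
import numpy as np

# 3 equations, 2 unknowns: over-determined
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 2.0, 4.0])

try:
    np.linalg.solve(A, b)  # solve requires a square coefficient matrix
except np.linalg.LinAlgError as e:
    print(e)  # "Last 2 dimensions of the array must be square"
```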
In this scenario, one possibly useful thing to do is to find the x that minimizes ||Ax - b||^2, i.e. the linear least-squares solution:
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
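Note that for an over-determined system whose matrix has full column rank, the QR-based code from the question still works if you keep the reduced QR factorization (NumPy's default mode): solving R x = Qᵀb then yields exactly the least-squares solution. A sketch with a made-up matrix and right-hand side:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))  # over-determined: 6 equations, 3 unknowns
b = rng.standard_normal(6)

# least-squares solution via lstsq
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# least-squares solution via reduced QR (A must have full column rank):
# Q is 6x3 with orthonormal columns, R is 3x3 upper triangular
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_lstsq, x_qr))  # the two solutions agree
```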