I'm focusing on the special case where A is an n x d matrix (where n < d) whose rows form an orthogonal basis for a subspace of R^d, and b is known to lie inside that subspace.
I thought of using the tools numpy provides (np.linalg.solve), but they only work with square matrices. My approach was to fill in the matrix with some linearly independent vectors to "square" it and then solve, but I could not figure out how to choose those vectors so that they are linearly independent of the basis vectors. Besides, I suspect this is not the only approach and that I'm missing something that could make this easier.
Is there indeed a simpler approach than the one I mentioned? If not, how do I choose the vectors that would complete A into a square matrix?
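For concreteness, here is one possible sketch of that completion idea, assuming numpy (the example matrix and tolerance are illustrative, not from the thread): the trailing right-singular vectors of A span the orthogonal complement of its row space, so stacking them under A yields a square matrix with linearly independent rows.

```python
import numpy as np

# Illustrative n x d matrix with orthonormal rows, n = 2 < d = 3.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

_, s, Vt = np.linalg.svd(A)              # full_matrices=True by default
rank = int(np.sum(s > 1e-12))            # numerical rank of A
# Rows of Vt past the rank span the orthogonal complement of A's row space,
# so they are guaranteed to be linearly independent of A's rows.
A_square = np.vstack([A, Vt[rank:]])     # d x d, full rank
print(np.linalg.matrix_rank(A_square))   # -> 3
```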
As you mentioned, np.linalg.solve needs a full-rank square matrix.
For all other linear cases, if you are interested in min ||Ax - b||^2 (which you probably are), you can use np.linalg.lstsq. From its documentation:
Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||^2.

The equation may be **under-**, well-, or over-determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation.

(bold annotation by me)
This is also mentioned in the docs of the original np.linalg.solve:
a must be square and of full-rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either is not true, use lstsq for the least-squares best “solution” of the system/equation.
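A minimal lstsq sketch for the setup in the question (the matrix, vectors, and rcond choice here are mine, purely for illustration):

```python
import numpy as np

# Underdetermined but consistent system: n = 2 equations, d = 3 unknowns.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])           # rows are orthonormal
x_true = np.array([2.0, -1.0, 0.0])
b = A @ x_true                            # consistent by construction

# lstsq accepts non-square A; rcond=None selects the machine-precision cutoff.
x, residuals, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)

print(x)                                  # minimum-norm solution
print(np.allclose(A @ x, b))              # True: the system is consistent
```

For a consistent underdetermined system like this, lstsq returns the minimum-norm solution among the infinitely many exact solutions.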
If you have fewer equations than unknowns (n < d), you are not expecting a unique solution. You can obtain the solutions using a singular value decomposition.
You can implement this last bit pretty easily in Python using for loops and if statements; the heavy lifting is the decomposition itself. Press et al.'s excellent Numerical Recipes covers linear algebra in Chapter 2 (freely available versions of Numerical Recipes in C are linked here and here). They have a great presentation of SVD that explains both the theory of SVD and how it translates into algorithms (mainly focusing on how to use the result of an SVD). They provide an SVD object with more functionality than numpy.linalg.svd, but as mentioned, the core functionality is the actual decomposition; doing things like getting nullspace basis vectors is just for loops and if statements operating on U, S, and V.
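As a rough sketch of that "dressing" (the function name and tolerance are my own illustrative choices, using numpy's SVD rather than the Numerical Recipes object), the minimum-norm solution and a nullspace basis can be read off U, S, and V directly:

```python
import numpy as np

def svd_solve(A, b, tol=1e-12):
    # Hypothetical helper: minimum-norm solution of A x = b plus a basis
    # for null(A), both obtained from the SVD as described above.
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol * s[0]))            # numerical rank
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]                 # invert nonzero singular values
    x_min = Vt[:rank].T @ (s_inv[:rank] * (U[:, :rank].T @ b))
    nullspace = Vt[rank:].T                       # columns span null(A)
    return x_min, nullspace

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([2.0, -1.0])
x, N = svd_solve(A, b)
# The full solution set is x + N @ c for any coefficient vector c.
print(np.allclose(A @ x, b))                      # True
```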