Solving Ax = b for a non-square matrix A using Python

I'm focusing on the special case where A is an n x d matrix (with n < d) representing an orthogonal basis for a subspace of R^d, and b is known to lie inside that subspace. I thought of using the tools provided by numpy, but they only work with square matrices. My idea was to "square" the matrix by filling it in with additional linearly independent vectors and then solve, but I could not figure out how to choose vectors that are guaranteed to be linearly independent of the basis vectors. I also suspect this is not the only approach and that I'm missing something that could make this easier. Is there indeed a simpler approach than the one I mentioned? If not, how do I choose the vectors that would complete A into a square matrix?

asked Sep 23 '17 by Ron Tubman



2 Answers

As you mentioned, np.linalg.solve needs a full-rank square matrix.

For all the other linear cases, if you are interested in minimizing ||Ax - b||^2 (you probably are), you can use np.linalg.lstsq.

Solves the equation a x = b by computing a vector x that minimizes the Euclidean 2-norm || b - a x ||^2.

The equation may be under-, well-, or over- determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the “exact” solution of the equation.


This is also mentioned in the docs of the original np.linalg.solve:

a must be square and of full-rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either is not true, use lstsq for the least-squares best “solution” of the system/equation.
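A minimal sketch of what lstsq looks like for an underdetermined system; the matrix and right-hand side below are made up for illustration, not taken from the question:

```python
import numpy as np

# Underdetermined system: 2 equations, 3 unknowns (n < d).
# The rows of A happen to be orthonormal, matching the question's setup.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([2.0, 3.0])

# lstsq minimizes ||Ax - b||^2; for a full-rank underdetermined system
# it returns the minimum-norm solution among the infinitely many solutions.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(x)                       # [2. 3. 0.]
print(np.allclose(A @ x, b))   # True: b is in the row span, so the fit is exact
```

Since b lies in the span here, the residual is zero and lstsq recovers an exact solution rather than just a best fit.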

answered Oct 02 '22 by sascha


If you have fewer equations than unknowns (n < d), you should not expect a unique solution. You can obtain the solutions using a singular value decomposition (SVD).

  • Start by finding the SVD, consisting of three matrices U, S, V^T, using numpy.linalg.svd. From it you can form the pseudoinverse of A as V [ diag(1/s_j) ] U^T (inverting only the nonzero singular values), which gives you one particular solution.
  • The general solution is that particular solution plus any linear combination of nullspace basis vectors of A.
  • To find the nullspace basis vectors of A, extract the columns j of V that correspond to singular values s_j that are zero (or below some "small" threshold).

You can implement this last bit pretty easily in Python using for loops and if statements - the heavy lifting is the decomposition itself. Press et al.'s excellent Numerical Recipes covers linear algebra in Chapter 2 (a freely available version of Numerical Recipes in C is linked here and here). They have a great presentation of SVD that explains both the theory and how it translates into algorithms (mainly focusing on how to use the result of an SVD). They provide an SVD object with more functionality than numpy.linalg.svd, but as mentioned, the core functionality is the decomposition itself; extracting things like nullspace basis vectors is just for loops and if statements operating on U, S, and V.
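The steps above can be sketched as follows; the small example system is made up for illustration, and the tolerance formula is one common convention, not the only choice:

```python
import numpy as np

# Hypothetical underdetermined system: 2 equations, 3 unknowns.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

# Full SVD: A = U @ diag(s) @ Vt, with U (2x2), s (2,), Vt (3x3).
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Invert only singular values above a "small" threshold.
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_inv = np.array([1.0 / sj if sj > tol else 0.0 for sj in s])

# Pseudoinverse applied to b: V @ diag(1/s_j) @ U^T @ b.
S_inv = np.zeros((A.shape[1], A.shape[0]))
np.fill_diagonal(S_inv, s_inv)
x_particular = Vt.T @ S_inv @ U.T @ b   # one (minimum-norm) solution

# Nullspace basis: rows of Vt (i.e. columns of V) whose singular value is ~0.
# A is 2x3 of rank 2, so its nullspace is 1-dimensional.
rank = int(np.sum(s > tol))
nullspace = Vt[rank:].T                 # shape (3, 1)

print(np.allclose(A @ x_particular, b))   # True: it solves Ax = b
print(np.allclose(A @ nullspace, 0.0))    # True: nullspace vectors map to 0
```

Every solution of Ax = b is then x_particular plus any scalar multiple of the nullspace column, which is exactly the "particular solution plus nullspace combinations" structure described above.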

answered Oct 02 '22 by charlesreid1