I can test the rank of a matrix using np.linalg.matrix_rank(A). But how can I efficiently test whether all the rows of A are orthogonal?
I could take all pairs of rows and compute the inner product between them, but is there a better way?
My matrix has fewer rows than columns and the rows are not unit vectors.
This answer summarizes the approaches mentioned in the question and the comments, and adds some comparisons and insights about them.
Approach #1 -- checking all row-pairs
As you suggested, you can iterate over all row pairs and compute the inner product of each pair. If A.shape == (N, M), i.e. you have N rows of size M each, you end up with O(M*N^2) complexity.
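A minimal sketch of this approach (the function name rows_orthogonal_pairwise and the tolerance tol are my own additions, not from the question):

import numpy as np

def rows_orthogonal_pairwise(A, tol=1e-10):
    # Compare every pair of rows; return as soon as one pair fails.
    N = A.shape[0]
    for i in range(N):
        for j in range(i + 1, N):
            if abs(np.dot(A[i], A[j])) > tol:
                return False
    return True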
Approach #2 -- matrix multiplication
As suggested in the comments by @JoeKington, you can compute the product A.dot(A.T) and check all the non-diagonal elements. Depending on the algorithm used for the matrix multiplication, this can be faster than the naive O(M*N^2) approach, but only asymptotically so; unless your matrices are big, it would be slower.
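A vectorized sketch of this check (np.allclose and the tolerance are my additions, to cope with floating-point noise):

import numpy as np

def rows_orthogonal_gram(A, tol=1e-10):
    # The Gram matrix A.dot(A.T) holds all pairwise inner products of the rows.
    G = A.dot(A.T)
    np.fill_diagonal(G, 0.0)  # the diagonal holds the squared row norms, which don't matter here
    return np.allclose(G, 0.0, atol=tol)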
The advantages of approach #1:
- It can break early: as soon as the first non-orthogonal pair of rows is found, there is no need to check the rest.

The advantages of approach #2:
- The multiplication is vectorized and delegated to heavily optimized linear-algebra libraries, with no Python-level loop.
My bet is that for small matrices, approach #2 would prove faster, because the LA libraries are heavily optimized, despite the fact that they compute the entire product instead of stopping at the first pair of non-orthogonal rows.
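If you want to check this bet for your own sizes, a rough timing harness (using the two sketch functions above; the shapes and repeat count are arbitrary) could look like:

import numpy as np
import timeit

A = np.random.rand(100, 1000)  # N=100 rows of size M=1000

t1 = timeit.timeit(lambda: rows_orthogonal_pairwise(A), number=10)
t2 = timeit.timeit(lambda: rows_orthogonal_gram(A), number=10)
print(f"pairwise: {t1:.4f}s  gram: {t2:.4f}s")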
It seems that this will do:

import numpy as np

product = np.dot(A, A.T)      # pairwise inner products between the rows of A
np.fill_diagonal(product, 0)  # zero the diagonal; the row norms are irrelevant here
if not product.any():         # no nonzero off-diagonal entry means the rows are orthogonal
    print("rows are orthogonal")

Note that with floating-point data, exact zeros are unlikely, so a tolerance-based test such as np.allclose(product, 0) is usually more robust.
In general, to find an orthogonal basis of the range space of some matrix X, one can compute the QR decomposition of this matrix (using Givens rotations or Householder reflectors). Q is an orthogonal matrix and R is upper triangular. The columns of Q corresponding to non-zero diagonal entries of R form an orthonormal basis of the range space.
If the columns of X = A^T, i.e., the rows of A, are already orthogonal, then the QR decomposition will necessarily produce a diagonal R factor, whose diagonal entries are plus or minus the lengths of the columns of X, i.e., of the rows of A.
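A sketch of this check with NumPy (the function name and the tolerance are my own; np.linalg.qr with mode='r' returns only the R factor):

import numpy as np

def rows_orthogonal_qr(A, tol=1e-10):
    # QR-factorize A^T: the columns of A^T are the rows of A.
    R = np.linalg.qr(A.T, mode='r')
    # R is upper triangular; if its strictly upper part is ~0, R is diagonal
    # and hence the rows of A are orthogonal.
    return np.allclose(np.triu(R, k=1), 0.0, atol=tol)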
Common folklore has it that this approach is numerically better behaved than computing the product A*A^T = R^T*R. This may only matter for larger matrices, though. The computation is not as straightforward as the matrix product; however, the number of operations is of the same order.