I was running a 'sanity test' today by comparing the run times of two elementary ways to solve Ax = b where A is square and b is a column vector:
A = rand(1000,1000); b = rand(1000,1); Xmat = zeros(1000,1001);
tic; [L,U] = lu(A); x = U\(L\b); toc;
tic; Xmat = rref([A b]); toc;
The output:
Elapsed time is 0.018528 seconds.
Elapsed time is 10.215791 seconds.
I find this discrepancy surprising, because for random, dense matrices I expect the LU step to be about as costly as Gaussian elimination (which in turn seems to be about as costly as finding the reduced echelon form of the augmented matrix, unless I'm missing something). I didn't expect LU to do so well for just a single vector b. The discrepancy also persists for more modest linear systems (say 100x100 matrices). What's happening here?
Textbook Gaussian elimination is not how lu is computed. lu is a "primitive" decomposition that dispatches to the Fortran LAPACK routines, which have been highly optimized over decades (blocked for cache, built on tuned BLAS). rref, on the other hand, is implemented at the MATLAB level by interpreted row swapping and pivoting. Moreover, rref inspects the matrix entries and, where possible, converts them into fractional representations via the rats function, which adds a huge overhead. That is because the only real use case for rref is educational or demonstration purposes; it is numerically not as reliable as the LU decomposition.
That's the speed difference you are seeing.
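To make the cost structure concrete, here is a minimal sketch in Python/SciPy (the question is MATLAB, but scipy.linalg.lu_factor wraps the same LAPACK factorization, dgetrf, that backs MATLAB's lu): one O(n^3) factorization, after which every right-hand side costs only two O(n^2) triangular solves. The matrix sizes here are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Mirror the LU path from the question: [L,U] = lu(A); x = U\(L\b)
rng = np.random.default_rng(0)
n = 200
A = rng.random((n, n))
b = rng.random(n)

lu_piv = lu_factor(A)        # one O(n^3) LAPACK factorization
x = lu_solve(lu_piv, b)      # two O(n^2) triangular solves

# The factorization is reusable: a second right-hand side costs
# only the triangular solves, not another factorization.
b2 = rng.random(n)
x2 = lu_solve(lu_piv, b2)

print(np.allclose(A @ x, b))    # → True
print(np.allclose(A @ x2, b2))  # → True
```

This reuse is another reason the factorization form is preferred over re-reducing an augmented matrix for each new b.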
Don't use rref if you are doing serious computations.
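The reliability point can be illustrated with a toy Gauss-Jordan routine (a hypothetical stand-in for rref, sketched in Python/NumPy, not MATLAB's actual implementation): an exact zero test on the pivots happily reduces a numerically rank-deficient matrix all the way to the identity, while the condition number shows the system is hopeless.

```python
import numpy as np

def toy_rref(M):
    """Textbook Gauss-Jordan elimination with partial pivoting.
    Illustrative stand-in only; not MATLAB's rref."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        p = r + np.argmax(np.abs(A[r:, c]))  # partial pivoting
        if A[p, c] == 0.0:                   # exact zero test, no tolerance
            continue
        A[[r, p]] = A[[p, r]]                # swap pivot row into place
        A[r] /= A[r, c]                      # normalize pivot row
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]       # eliminate column c elsewhere
        r += 1
    return A

# Nearly singular: the rows differ only by ~1e-13 in one entry
A = np.array([[1.0, 2.0],
              [1.0, 2.0 + 1e-13]])
print(np.linalg.cond(A))  # enormous: A is numerically close to rank 1
print(toy_rref(A))        # yet elimination reports the identity (full rank)
```

A pivoted factorization lets you detect this via the tiny pivot or the condition estimate; the reduced echelon form simply discards that information.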