I need to invert a large, dense matrix, which I hoped to do with SciPy's gmres. Fortunately, the dense matrix A follows a pattern and I do not need to store it in memory. The LinearOperator class allows us to construct an object which acts as the matrix for GMRES and can compute the matrix-vector product A*v directly. That is, we write a function mv(v) which takes as input a vector v and returns mv(v) = A*v. Then we can use the LinearOperator class to create A_LinOp = LinearOperator(shape=shape, matvec=mv). We can pass the linear operator to SciPy's gmres to evaluate the matrix-vector products without ever having to fully load A into memory.
The documentation for the LinearOperator
is found here: LinearOperator
Documentation.
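As a minimal sketch of that setup (with a stand-in matvec, since the real pattern for A*v isn't shown here):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

N = 1000

def mv(v):
    # Stand-in for the real matrix-free product A*v:
    # here A = 2*I + (cyclic shift), applied without ever forming A.
    return 2.0 * v + np.roll(v, 1)

A_LinOp = LinearOperator(shape=(N, N), matvec=mv)

b = np.ones(N)
x, info = gmres(A_LinOp, b)  # info == 0 means GMRES converged
```

gmres never asks for the entries of A; it only calls the matvec, which is exactly why a LinearOperator suffices.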
Here is my problem: to write the routine that computes the matrix-vector product mv(v) = A*v, I need another input vector C. The entries of A are of the form A[i,j] = f(C[i] - C[j]). So what I really want is for mv to take two inputs: one fixed vector C, and one variable input v for which we want to compute A*v.
MATLAB has a similar setup, where one would write x = gmres(@(v) mv(v,C), b), where b is the right-hand side of the problem Ax = b, mv is the function that takes the variable input v for which we want to compute A*v, and C is the fixed, known vector which we need for the assembly of A.
My problem is that I can't figure out how to get the LinearOperator class to accept two inputs, one variable and one "fixed", like I can in MATLAB. Is there a way to do the analogous operation in SciPy? Alternatively, if anyone knows of a better way of inverting a large, dense (50000, 50000) matrix whose entries follow a pattern, I would greatly appreciate any suggestions. Thanks!
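For what it's worth, the usual Python idiom for MATLAB's @(v) mv(v, C) is a closure (a lambda, or functools.partial) that captures the fixed C. A sketch, with a placeholder kernel f standing in for the real one:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def f(d):
    # Placeholder kernel -- substitute the real f from the problem.
    return np.exp(-np.abs(d))

def mv(v, C):
    # A[i, j] = f(C[i] - C[j]), applied one row at a time so A is never stored.
    out = np.empty_like(v)
    for i in range(len(C)):
        out[i] = np.dot(f(C[i] - C), v)
    return out

N = 200
C = np.arange(N, dtype=float)
b = np.ones(N)

# The lambda binds the fixed C, so matvec sees a function of v alone --
# the analogue of MATLAB's @(v) mv(v, C).
A_LinOp = LinearOperator(shape=(N, N), matvec=lambda v: mv(v, C))
x, info = gmres(A_LinOp, b)
```

functools.partial(mv, C=C) works equally well and can be friendlier for multiprocessing later, since partial objects are picklable while lambdas are not.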
EDIT: I should have stated this information up front. The matrix is actually (in block form) [A C; C^T 0], where A is N x N (N large), C is N x 3, the 0 block is 3 x 3, and C^T is the transpose of C. This array C is the same array as the one mentioned above. The entries of A follow the pattern A[i,j] = f(C[i] - C[j]).
I wrote mv(v, C) to construct (A*v)[i] row by row for i = 0, ..., N-1, by computing sum_j f(C[i] - C[j])*v[j] (actually, I do numpy.dot(FC, v), where FC[j] = f(C[i] - C[j]), which works well). Then, at the end, I do the computations for the C^T rows. I was hoping eventually to replace the large for loop with a multiprocessing call to parallelize it, but that's a future thing to consider. I will also look into using Cython to speed up the computations.
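A sketch of that extended product, i.e. the action of [A C; C^T 0] on a stacked vector [v; mu], with a hypothetical radial kernel f in place of the real one:

```python
import numpy as np

def f(d):
    # Hypothetical kernel on 3-vector differences; substitute the real f.
    return np.exp(-np.linalg.norm(d, axis=-1))

def block_mv(w, C):
    # [A C; C^T 0] @ [v; mu], with A[i, j] = f(C[i] - C[j]) formed row by row
    # so the N x N block A never has to be stored.
    N = C.shape[0]
    v, mu = w[:N], w[N:]
    top = np.empty(N)
    for i in range(N):
        top[i] = np.dot(f(C[i] - C), v)  # row i of A @ v
    top += C @ mu                        # + C @ mu
    bottom = C.T @ v                     # C^T @ v
    return np.concatenate([top, bottom])
```

Wrapped in a LinearOperator of shape (N + 3, N + 3), this is exactly the matvec GMRES needs for the full block system.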
This is very late, but if you're still interested...
Your A matrix must be very low rank, since it's a nonlinearly transformed version of a rank-2 matrix. Plus it's symmetric. That means it's trivial to invert: get the truncated eigenvalue decomposition with, say, 5 eigenvalues, A = U*S*U', then invert that: A^-1 = U*S^-1*U'. S is diagonal, so this is inexpensive. You can get the truncated eigenvalue decomposition with eigh.
That takes care of A. Then, for the rest, use the block matrix inversion formula. It looks nasty, but I will bet you 100,000,000 Prussian francs that it's 50x faster than the direct method you were using.
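For reference, the block solve for [A C; C^T 0][x; y] = [b1; b2] via the Schur complement looks like this. The sketch assumes A is invertible (with the low-rank A above, substitute the pseudoinverse for the A-solves); the Schur complement is only 3 x 3, so its solve is negligible:

```python
import numpy as np

def solve_saddle(A, C, b1, b2):
    # Solve [A C; C^T 0] [x; y] = [b1; b2] using the Schur complement
    # S = -C^T A^-1 C of the A block (assumes A is invertible).
    Ainv_C = np.linalg.solve(A, C)        # A^-1 C, shape (N, 3)
    Ainv_b1 = np.linalg.solve(A, b1)      # A^-1 b1
    S = -C.T @ Ainv_C                     # 3 x 3 Schur complement
    y = np.linalg.solve(S, b2 - C.T @ Ainv_b1)
    x = Ainv_b1 - Ainv_C @ y
    return x, y
```

With the truncated factors U, S from above, each A-solve collapses to two skinny matrix products, so the whole thing costs O(N*k) per solve instead of O(N^2) per GMRES iteration.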