 

Efficient computation of trace(A B^{-1}) given A and B

I have two square matrices, A and B. A is symmetric and B is symmetric positive definite. I would like to compute $\operatorname{trace}(A B^{-1})$. For now, I compute the Cholesky decomposition of B, solve for C in the equation $A = C B$, and sum the diagonal elements of C.

Is there a more efficient way of proceeding?

I plan on using Eigen. Could you provide an implementation for the case where the matrices are sparse (A is often diagonal, B is often band-diagonal)?
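The baseline described above (factor B, solve for C = A B^{-1}, sum the diagonal) can be sketched as follows. This is a dependency-free illustration in plain C++, not an Eigen implementation; with Eigen, a dense `LLT` (or `SimplicialLLT` for sparse B) would replace the hand-rolled factorization, and the helper names and the 2x2 example are made up for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Cholesky factorization B = L L^T (B symmetric positive definite).
Matrix cholesky(const Matrix& B) {
    int n = (int)B.size();
    Matrix L(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j <= i; ++j) {
            double s = B[i][j];
            for (int k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(s) : s / L[j][j];
        }
    }
    return L;
}

// Solve B x = b given B = L L^T: forward substitution L y = b,
// then back substitution L^T x = y (done in place on b).
std::vector<double> solve_chol(const Matrix& L, std::vector<double> b) {
    int n = (int)L.size();
    for (int i = 0; i < n; ++i) {       // L y = b
        for (int k = 0; k < i; ++k) b[i] -= L[i][k] * b[k];
        b[i] /= L[i][i];
    }
    for (int i = n - 1; i >= 0; --i) {  // L^T x = y
        for (int k = i + 1; k < n; ++k) b[i] -= L[k][i] * b[k];
        b[i] /= L[i][i];
    }
    return b;
}

// trace(A B^{-1}) = trace(B^{-1} A) = sum_i (B^{-1} a_i)_i,
// where a_i is the i-th column of A.
double trace_ABinv(const Matrix& A, const Matrix& B) {
    int n = (int)A.size();
    Matrix L = cholesky(B);
    double t = 0.0;
    for (int i = 0; i < n; ++i) {
        std::vector<double> ai(n);
        for (int r = 0; r < n; ++r) ai[r] = A[r][i];  // column i of A
        t += solve_chol(L, ai)[i];
    }
    return t;
}
```

For example, with A = [[2, 0], [0, 1]] and B = [[4, 1], [1, 3]], B^{-1} = (1/11)[[3, -1], [-1, 4]], so trace(A B^{-1}) = 6/11 + 4/11 = 10/11.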

yannick asked Sep 22 '11

1 Answer

If B is sparse, it may be efficient (i.e., roughly O(n) per solve, assuming B is well conditioned) to solve for x_i in

B x_i = a_i

(sample conjugate gradient code is given on Wikipedia). Taking the a_i to be the columns of A, you get the matrix B^{-1} A in O(n^2); summing its diagonal elements gives the trace, since trace(A B^{-1}) = trace(B^{-1} A). Generally, it's easier to do this sparse inverse multiplication than to compute the full set of eigenvalues. For comparison, dense Cholesky decomposition is O(n^3) (see Darren Engwirda's comment below about Cholesky).
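To make the column-by-column approach concrete, here is a dependency-free sketch with a hand-rolled conjugate gradient solver following the standard Wikipedia pseudocode. In Eigen one would instead use `ConjugateGradient<SparseMatrix<double>>` and let it exploit sparsity; the dense `Matrix` type, helper names, and tolerance below are illustrative only:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Matrix = std::vector<Vec>;

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

Vec matvec(const Matrix& M, const Vec& x) {
    int n = (int)M.size();
    Vec y(n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) y[i] += M[i][j] * x[j];
    return y;
}

// Conjugate gradient for SPD B: solves B x = b. In exact arithmetic it
// converges in at most n steps; well-conditioned B needs far fewer.
Vec cg_solve(const Matrix& B, const Vec& b, double tol = 1e-12) {
    int n = (int)b.size();
    Vec x(n, 0.0), r = b, p = r;
    double rs = dot(r, r);
    for (int it = 0; it < 2 * n; ++it) {
        if (std::sqrt(rs) <= tol) break;
        Vec Bp = matvec(B, p);
        double alpha = rs / dot(p, Bp);
        for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Bp[i]; }
        double rs_new = dot(r, r);
        for (int i = 0; i < n; ++i) p[i] = r[i] + (rs_new / rs) * p[i];
        rs = rs_new;
    }
    return x;
}

// trace(A B^{-1}) = sum_i (B^{-1} a_i)_i, one CG solve per column a_i of A.
double trace_ABinv_cg(const Matrix& A, const Matrix& B) {
    int n = (int)A.size();
    double t = 0.0;
    for (int i = 0; i < n; ++i) {
        Vec ai(n);
        for (int r = 0; r < n; ++r) ai[r] = A[r][i];  // column i of A
        t += cg_solve(B, ai)[i];
    }
    return t;
}
```

When A is diagonal, only the i-th diagonal entry of A is nonzero in column a_i, so each solve is against a scaled unit vector; when A has zero columns, those solves can be skipped entirely.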

If you only need an approximation to the trace, you can reduce the cost to O(q n) by averaging

r^T (A B^{-1}) r

over q random vectors r; usually q << n. This is an unbiased estimate provided that the components of the random vector r satisfy

< r_i r_j > = \delta_{ij}

where < ... > denotes an average over the distribution of r. For example, the components r_i could be independent Gaussian variables with unit variance, or drawn uniformly from {+1, -1}. Typically the trace scales like O(n) and the error of the estimate scales like O(sqrt(n/q)), so the relative error scales as O(1/sqrt(n q)).
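This randomized estimator (Hutchinson's trick) can be sketched as follows, again in plain C++ with a hand-rolled CG solver standing in for Eigen's; each sample costs one solve with B plus one multiply by A, and since A is symmetric, r^T A B^{-1} r is computed as (A r) . (B^{-1} r). The function names, the +-1 choice of distribution, and the sample count are assumptions for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

using Vec = std::vector<double>;
using Matrix = std::vector<Vec>;

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

Vec matvec(const Matrix& M, const Vec& x) {
    int n = (int)M.size();
    Vec y(n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) y[i] += M[i][j] * x[j];
    return y;
}

// Conjugate gradient for SPD B: solves B x = b.
Vec cg_solve(const Matrix& B, const Vec& b, double tol = 1e-12) {
    int n = (int)b.size();
    Vec x(n, 0.0), r = b, p = r;
    double rs = dot(r, r);
    for (int it = 0; it < 2 * n; ++it) {
        if (std::sqrt(rs) <= tol) break;
        Vec Bp = matvec(B, p);
        double alpha = rs / dot(p, Bp);
        for (int i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Bp[i]; }
        double rs_new = dot(r, r);
        for (int i = 0; i < n; ++i) p[i] = r[i] + (rs_new / rs) * p[i];
        rs = rs_new;
    }
    return x;
}

// Stochastic trace estimate: average r^T A (B^{-1} r) over q random +-1
// (Rademacher) vectors r, which satisfy <r_i r_j> = delta_ij.
double trace_estimate(const Matrix& A, const Matrix& B, int q, unsigned seed = 0) {
    int n = (int)A.size();
    std::mt19937 gen(seed);
    std::bernoulli_distribution coin(0.5);
    double sum = 0.0;
    for (int k = 0; k < q; ++k) {
        Vec r(n);
        for (int i = 0; i < n; ++i) r[i] = coin(gen) ? 1.0 : -1.0;
        Vec z = cg_solve(B, r);       // z = B^{-1} r: one solve per sample
        sum += dot(r, matvec(A, z));  // r^T A B^{-1} r (A symmetric)
    }
    return sum / q;
}
```

Each sample equals the exact trace plus a zero-mean fluctuation, so averaging over q samples shrinks the error like 1/sqrt(q), matching the O(sqrt(n/q)) scaling above.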

Kipton Barros answered Oct 20 '22