
Speed up Python code for computing matrix cofactors

As part of a complex task, I need to compute matrix cofactors. I did this in a straightforward way using this nice code for computing matrix minors. Here is my code:

import numpy as np

def matrix_cofactor(matrix):
    C = np.zeros(matrix.shape)
    nrows, ncols = C.shape
    for row in range(nrows):
        for col in range(ncols):
            # build the minor by selecting every row and column except the current ones
            minor = matrix[np.array(list(range(row)) + list(range(row + 1, nrows)))[:, np.newaxis],
                           np.array(list(range(col)) + list(range(col + 1, ncols)))]
            C[row, col] = (-1) ** (row + col) * np.linalg.det(minor)
    return C

It turns out that this matrix cofactor code is the bottleneck, and I would like to optimize the code snippet above. Any ideas as to how to do this?

asked Jun 29 '11 by gaefan

3 Answers

Instead of using the inverse and determinant, I'd suggest using the SVD; it avoids the explicit inverse, so it also works when the matrix is singular:

def cofactors(A):
    # Cofactor matrix via the SVD A = U @ diag(sigma) @ Vt:
    # C = det(A) * inv(A).T = det(U) * det(Vt) * U @ diag(prod_{j != i} sigma_j) @ Vt,
    # which stays well defined even when A is singular.
    U, sigma, Vt = np.linalg.svd(A)
    N = len(sigma)
    g = np.tile(sigma, N)
    g[::N + 1] = 1                                   # replace sigma_i in row i so the row product skips it
    prods = np.prod(np.reshape(g, (N, N)), axis=1)   # product of all sigma_j except sigma_i
    sign = np.linalg.det(U) * np.linalg.det(Vt)      # numerically +/- 1, from the orthogonal factors
    G = np.diag(sign * prods)
    return U @ G @ Vt
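As a quick sanity check (a hypothetical test, not part of the original answer), you can compare the result against the det-times-inverse-transpose identity on a random, almost surely invertible matrix:

import numpy as np

A = np.random.rand(5, 5)                       # almost surely invertible
C_svd = cofactors(A)
C_inv = np.linalg.det(A) * np.linalg.inv(A).T  # cofactor matrix of an invertible A
print(np.allclose(C_svd, C_inv))               # expected: True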
answered by Greg von Winckel

If your matrix is invertible, the cofactor matrix is related to the inverse:

def matrix_cofactor(matrix):
    # C = det(A) * inv(A).T for an invertible A
    return np.linalg.inv(matrix).T * np.linalg.det(matrix)

This gives large speedups (~1000x for 50x50 matrices). The main reason is fundamental: the inverse and determinant each cost O(n^3), whereas the minor-based approach computes n^2 determinants of (n-1)x(n-1) matrices, which is O(n^5).
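If you want to check the speedup on your own machine, a rough timing sketch could look like the following (the function names are hypothetical renames of the two versions above, since both were called matrix_cofactor; exact numbers will vary with your machine and BLAS):

import timeit
import numpy as np

# matrix_cofactor_minors -- the original minor/det double loop from the question
# matrix_cofactor_inv    -- det(A) * inv(A).T from this answer
A = np.random.rand(50, 50)

t_minors = timeit.timeit(lambda: matrix_cofactor_minors(A), number=3)
t_inv = timeit.timeit(lambda: matrix_cofactor_inv(A), number=3)
print("minors: %.3f s, inverse: %.3f s, speedup: %.0fx" % (t_minors, t_inv, t_minors / t_inv))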

The complexity gap probably means that also for non-invertible matrices there is some clever way to calculate the cofactor matrix (i.e., one that does not use the minor-based formula above, but some other equivalent definition).


If you stick with the det-based approach, what you can do is the following:

The majority of the time seems to be spent inside det. (Check out line_profiler to find this out yourself.) You can try to speed that part up by linking Numpy with the Intel MKL, but other than that, there is not much that can be done.
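For reference, a typical line_profiler session looks roughly like this (the file name is just an example; the @profile decorator is injected by kernprof at run time, so no import is needed):

# cofactor_profile.py
import numpy as np

@profile                      # provided by kernprof when run with -l
def matrix_cofactor(matrix):
    C = np.zeros(matrix.shape)
    nrows, ncols = C.shape
    for row in range(nrows):
        for col in range(ncols):
            minor = matrix[np.array(list(range(row)) + list(range(row + 1, nrows)))[:, np.newaxis],
                           np.array(list(range(col)) + list(range(col + 1, ncols)))]
            C[row, col] = (-1) ** (row + col) * np.linalg.det(minor)
    return C

matrix_cofactor(np.random.rand(30, 30))

# run with:
#   kernprof -l -v cofactor_profile.py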

You can speed up the other part of the code like this:

minor = np.zeros([nrows - 1, ncols - 1])   # preallocate once, outside the loops
for row in range(nrows):
    for col in range(ncols):
        # copy the four blocks around (row, col) with plain slices
        minor[:row, :col] = matrix[:row, :col]
        minor[row:, :col] = matrix[row + 1:, :col]
        minor[:row, col:] = matrix[:row, col + 1:]
        minor[row:, col:] = matrix[row + 1:, col + 1:]
        ...  # then C[row, col] = (-1) ** (row + col) * np.linalg.det(minor) as before

This gains some 10-50% of total runtime, depending on the size of your matrices: the original code uses Python range and list manipulations, which are slower than direct slice indexing. You could also try to be more clever and copy only the parts of the minor that actually change. However, after the above change close to 100% of the time is already spent inside numpy.linalg.det, so further optimization of the other parts does not make much sense.

answered by pv.


The row-index array np.array(list(range(row)) + list(range(row + 1, nrows)))[:, np.newaxis] does not depend on col, so you can compute it once outside the inner loop and reuse it. Depending on the number of columns you have, this might give a small optimization, as in the sketch below.
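A minimal sketch of that hoisting, assuming the variables (matrix, C, nrows, ncols) from the question's matrix_cofactor:

for row in range(nrows):
    # build the row selector once per row instead of once per (row, col) pair
    row_idx = np.array(list(range(row)) + list(range(row + 1, nrows)))[:, np.newaxis]
    for col in range(ncols):
        col_idx = np.array(list(range(col)) + list(range(col + 1, ncols)))
        minor = matrix[row_idx, col_idx]
        C[row, col] = (-1) ** (row + col) * np.linalg.det(minor)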

answered by borrible