
numpy more efficient submatrix

Tags: python, numpy

Currently I am using the code below to get the submatrix of A with the ith row and jth column removed, but after profiling, it appears to be one of the main bottlenecks in my code. Is there a more efficient way?

import logging

from numpy import empty

logger = logging.getLogger(__name__)

def submatrix(A, i, j):
    """Return a copy of A with row i and column j removed."""
    logger.debug('submatrix(%r, %r, %r)', A, i, j)
    B = empty(shape=tuple(x - 1 for x in A.shape), dtype=int)
    B[:i, :j] = A[:i, :j]          # top-left block
    B[i:, :j] = A[i+1:, :j]        # bottom-left block (skips row i)
    B[:i, j:] = A[:i, j+1:]        # top-right block (skips column j)
    B[i:, j:] = A[i+1:, j+1:]      # bottom-right block (skips both)
    return B
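
For illustration, here is the expected behaviour on a small 3x3 test input (a made-up example, assuming the submatrix function above):

    import numpy as np

    A = np.arange(9).reshape(3, 3)    # hypothetical test input
    print(submatrix(A, 1, 1))         # row 1 and column 1 removed
    # [[0 2]
    #  [6 8]]

The relevant excerpt from the profiler output is: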

         25015049 function calls (24599369 primitive calls) in 44.587 seconds

   Ordered by: internal time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  3983040   15.541    0.000   20.719    0.000 defmatrix.py:301(__getitem__)
   415680   10.216    0.000   33.069    0.000 hill.py:127(submatrix)
 415686/6    3.232    0.000   44.578    7.430 hill.py:112(det)

Edit: Jaime has provided a good way to approximate modular inverses using the regular inverse and determinant (sketched below); however, with large bases (modulo 256 in my case), the inaccuracy is enough to render the whole thing moot. The main time sink actually appears to be __getitem__ in numpy's defmatrix.py, but I believe that is caused by these lines:

    B[:i, :j] = A[:i, :j]
    B[i:, :j] = A[i+1:, :j]
    B[:i, j:] = A[:i, j+1:]
    B[i:, j:] = A[i+1:, j+1:]

It's possible the bottleneck isn't copying matrices around in memory, but matrix entry access.
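
For reference, a rough sketch of the approach the edit mentions (approximating the modular inverse from the floating-point determinant and inverse) might look like the following; the function name is made up, it assumes Python 3.8+ for the three-argument pow, and the rounding of the adjugate is exactly where the inaccuracy with a modulus like 256 creeps in:

    import numpy as np

    def modinv_matrix_approx(A, m):
        # A^-1 (mod m) == det(A)^-1 * adj(A) (mod m), and adj(A) = det(A) * inv(A),
        # so both pieces can be estimated from the floating-point det and inverse.
        det = int(round(np.linalg.det(A)))
        adj = np.rint(det * np.linalg.inv(A)).astype(int)  # approximate adjugate
        det_inv = pow(det % m, -1, m)                      # scalar modular inverse
        return (det_inv * adj) % m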

asked by darkfeline

1 Answer

Hmm... you are only copying matrices, so it will probably be difficult to speed this up much, but one thing you can try is to verify that A is stored in a contiguous block of memory, which can speed up access in the underlying C code. Look at numpy.ascontiguousarray().
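
For example, a minimal check (not from the original code, using a deliberately strided view to illustrate) would be:

    import numpy as np

    A = np.arange(16).reshape(4, 4)[:, ::2]   # a strided view, so not C-contiguous
    print(A.flags['C_CONTIGUOUS'])            # False

    A = np.ascontiguousarray(A)               # copies into one contiguous block if needed
    print(A.flags['C_CONTIGUOUS'])            # True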

answered by Bitwise


