
Need help vectorizing code or optimizing

I am trying to do a double integral by first interpolating the data to make a surface. I am using Numba to try to speed this process up, but it's just taking too long.

Here is my code, with the images needed to run it available here and here.

Asked by Wesley Bowman on Feb 17 '23


1 Answer

Noting that your code has a quadruple-nested set of for loops, I focused on optimizing the inner pair. Here's the old code:

for i in xrange(K.shape[0]):
    for j in xrange(K.shape[1]):

        print(i,j)
        '''create an r vector '''
        r=(i*distX,j*distY,z)

        for x in xrange(img.shape[0]):
            for y in xrange(img.shape[1]):
                '''create a ksi vector, then calculate
                   its norm and the dot product of r and ksi'''
                ksi=(x*distX,y*distY,z)
                ksiNorm=np.linalg.norm(ksi)
                ksiDotR=float(np.dot(ksi,r))

                '''calculate the integrand'''
                temp[x,y]=img[x,y]*np.exp(1j*k*ksiDotR/ksiNorm)

        '''interpolate so that we can take the integral'''
        temp2=rbs(a,b,temp.real)
        K[i,j]=temp2.integral(0,n,0,m)

Since K and img are each about 2000x2000, the innermost statements need to be executed sixteen trillion times. That is simply not practical in pure Python, but we can shift the work into C and Fortran by vectorizing with NumPy. I did this one careful step at a time to make sure the results still match; here's what I ended up with:

'''create all r vectors'''
R = np.empty((K.shape[0], K.shape[1], 3))
R[:,:,0] = np.repeat(np.arange(K.shape[0]), K.shape[1]).reshape(K.shape) * distX
R[:,:,1] = np.arange(K.shape[1]) * distY
R[:,:,2] = z
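# equivalently, broadcasting builds the same grid without np.repeat:
#   R[:,:,0] = (np.arange(K.shape[0]) * distX)[:, None]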

'''create all ksi vectors'''
KSI = np.empty((img.shape[0], img.shape[1], 3))
KSI[:,:,0] = np.repeat(np.arange(img.shape[0]), img.shape[1]).reshape(img.shape) * distX
KSI[:,:,1] = np.arange(img.shape[1]) * distY
KSI[:,:,2] = z

# vectorized 2-norm; see http://stackoverflow.com/a/7741976/4323                                                    
KSInorm = np.sum(np.abs(KSI)**2,axis=-1)**(1./2)
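# equivalently, on NumPy >= 1.8: KSInorm = np.linalg.norm(KSI, axis=-1)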

# loop over entire K, which is same shape as img, rows first                                                        
# this loop populates K, one pixel at a time (so can be parallelized)                                               
for i in xrange(K.shape[0]):                                                                                    
    for j in xrange(K.shape[1]):                                                                                

        print(i, j)

        KSIdotR = np.dot(KSI, R[i,j])
        temp = img * np.exp(1j * k * KSIdotR / KSInorm)

        '''interpolate so that we can take the integral'''
        temp2 = rbs(a, b, temp.real)
        K[i,j] = temp2.integral(0, n, 0, m)

The inner pair of loops is now completely gone, replaced by vectorized operations done in advance (at a space cost linear in the size of the inputs).
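One practical note: the snippet above reuses rbs, a, b, n, m, distX, distY, k, and z from the question's code, so it won't run standalone. Purely as a stand-in for testing, a minimal setup might look like this (every value here is an assumption, not taken from the question):

import numpy as np
from scipy.interpolate import RectBivariateSpline

img = np.random.rand(200, 200)   # small stand-in for the 2000x2000 image
K = np.empty(img.shape, dtype=complex)
distX = distY = 1.0              # grid spacing (assumed)
z = 1.0                          # z component of the r and ksi vectors (assumed)
k = 2 * np.pi                    # wavenumber (assumed)
n, m = img.shape                 # integration limits, as in the question
a = np.arange(img.shape[0])      # spline sample points along x
b = np.arange(img.shape[1])      # spline sample points along y
rbs = RectBivariateSpline        # so rbs(a, b, temp.real) builds the spline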

This reduces the time per iteration of the outer two loops from 340 seconds to 1.3 seconds on my 1.6 GHz i5 MacBook Air, without using Numba. Of the 1.3 seconds per iteration, 0.68 seconds are spent in the rbs function, which is scipy.interpolate.RectBivariateSpline. There is probably room to optimize further; here are some ideas:

  1. Re-enable Numba. I don't have it on my system, and it may not make much difference at this point, but it's easy for you to test.
  2. Do more domain-specific optimization, such as trying to simplify the fundamental calculations being done. My optimizations are intended to be lossless, and I don't know your problem domain so I can't optimize as deeply as you may be able to.
  3. Try to vectorize the remaining loops. This may be tough unless you are willing to replace the scipy RBS function with something that supports multiple calculations per call (see the sketch after this list).
  4. Get a faster CPU. Mine is pretty slow; you can probably get a speedup of at least 2x simply by using a better computer than my tiny laptop.
  5. Downsample your data. Your test images are 2000x2000 pixels, but contain fairly little detail. Since the total work grows with the fourth power of the linear size, cutting the linear dimensions by 2-10x would give a 16x-10,000x speedup.
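On idea 3: scipy's RectBivariateSpline fits one surface per call, which is why it sits inside the loop. If a trapezoidal approximation of the double integral is acceptable for your data (a deliberate trade of accuracy for speed, not a lossless change like the ones above), np.trapz operates on whole arrays and removes the spline fit entirely:

import numpy as np

def integrate_grid(temp_real, a, b):
    # trapezoidal double integral over the grid: y (axis=1) first, then x
    return np.trapz(np.trapz(temp_real, b, axis=1), a)

Inside the i, j loop, the two rbs lines then collapse to K[i,j] = integrate_grid(temp.real, a, b).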

So that's it for me for now. Where does this leave you? Assuming a slightly better computer and no further optimization work, even the optimized code would take about a month to process your test images. If you only have to do this once, maybe that's fine. If you need to do it more often, or need to iterate on the code as you try different things, you probably need to keep optimizing, starting with that RBS function, which now consumes more than half the time.

Bonus tip: your code would be a lot easier to deal with if it didn't have nearly identical variable names like k and K, and didn't use j both as a loop variable and as the complex-number suffix (0j).

Answered by John Zwinck on Feb 18 '23