
Numpy: Vectorize np.argwhere

I have the following data structures in numpy:

import numpy as np

a = np.random.rand(267, 173) # dense img matrix
b = np.random.rand(199) # array of probability samples

My goal is to take each entry i in b, find the x,y coordinates/index positions of all values in a that are <= i, then randomly select one of the values in that subset:

from random import randint

for i in b:
  l = np.argwhere(a <= i) # list of img coordinates where pixel <= i
  sample = l[randint(0, len(l)-1)] # random selection from `l`

This "works", but I'd like to vectorize the sampling operation (i.e. replace the for loop with apply_along_axis or similar). Does anyone know how this can be done? Any suggestions would be greatly appreciated!

asked Jul 30 '19 by duhaime

2 Answers

You can't exactly vectorize np.argwhere because you have a random subset size every time. What you can do, though, is speed up the computation pretty dramatically with sorting. Sorting the image once creates a single allocation, while masking the image at every step creates a temporary array for the mask and another for the extracted elements. With a sorted image, you can just apply np.searchsorted to get the subset sizes:

a_sorted = np.sort(a.ravel())
indices = np.searchsorted(a_sorted, b, side='right')

You still need a loop to do the sampling, but you can do something like

samples = np.array([a_sorted[np.random.randint(i)] for i in indices])
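
If you want to remove the Python-level loop from the sampling entirely, one option (my own sketch, not part of the answer above) is to scale a batch of uniform floats by the counts; this assumes every entry of b is at least a.min(), so every count in indices is nonzero:

# one uniform index in [0, indices[j]) for every j, drawn in a single call
choice = (np.random.random(indices.size) * indices).astype(int)
samples = a_sorted[choice]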

Getting x-y coordinates instead of sample values is a bit more complicated with this system. You can use np.unravel_index to get the indices, but first you must convert from the reference frame of a_sorted to that of a.ravel(). If you sort using np.argsort instead of np.sort, you can get the indices into the original array. Fortunately, np.searchsorted supports this exact scenario with the sorter parameter:

a_ind = np.argsort(a, axis=None)
indices = np.searchsorted(a.ravel(), b, side='right', sorter=a_ind)
r, c = np.unravel_index(a_ind[[np.random.randint(i) for i in indices]], a.shape)

r and c are the same size as b, and hold the row and column indices in a of each selection based on b. The index conversion depends on the strides in your array, so we assume C order here, which is the default for the vast majority of arrays.
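
As a quick sanity check, here is a minimal sketch reusing the snippet above (and assuming every entry of b is at least a.min(), so every count is nonzero); every selected pixel should satisfy the inequality:

import numpy as np

a = np.random.rand(267, 173)
b = np.random.rand(199)

a_ind = np.argsort(a, axis=None)
indices = np.searchsorted(a.ravel(), b, side='right', sorter=a_ind)
r, c = np.unravel_index(a_ind[[np.random.randint(i) for i in indices]], a.shape)

print((a[r, c] <= b).all())  # expect True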

Complexity

Let's say b has size M and a has size N.

Your current algorithm does a linear search through every element of a for each element of b. At each iteration, it allocates a mask over all N elements, and then a buffer (about N/2 entries on average) to hold the matching coordinates. This means the time complexity is on the order of O(M * N), and the space complexity is as well.

My algorithm sorts a first, which is O(N log N). Then it searches for M insertion points, which is O(M log N). Finally, it selects M samples. The space it allocates is one sorted copy of the image and two arrays of size M. It is therefore O((M + N) log N) in time and O(M + N) in space.
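
To illustrate the difference, here is a minimal timing sketch (mine, not from the answer; the helper names original and sorted_sample are made up) comparing the loop from the question against the sorted approach:

import numpy as np
from timeit import timeit

a = np.random.rand(267, 173)
b = np.random.rand(199)

def original():
    out = []
    for i in b:
        l = np.argwhere(a <= i)               # mask + coordinate extraction every time
        out.append(l[np.random.randint(len(l))])
    return out

def sorted_sample():
    a_ind = np.argsort(a, axis=None)          # one O(N log N) sort
    idx = np.searchsorted(a.ravel(), b, side='right', sorter=a_ind)
    return np.unravel_index(a_ind[[np.random.randint(i) for i in idx]], a.shape)

print('loop  ', timeit(original, number=100))
print('sorted', timeit(sorted_sample, number=100))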

answered Sep 23 '22 by Mad Physicist


Here is an alternative approach: argsort b instead, then bin a accordingly, using np.digitize and this post:

import numpy as np
from scipy import sparse
from timeit import timeit
import math

def h_digitize(a,bs,right=False):
    # faster approximate np.digitize: map values onto a fine uniform grid of
    # nbins lookup cells, precompute which digitize bin each cell belongs to,
    # and fall back to np.digitize only for values that land in a cell
    # containing one of the bs boundaries (marked -1 below)
    mx,mn = a.max(),a.min()
    asz = mx-mn
    bsz = bs[-1]-bs[0]
    # number of uniform lookup cells
    nbins=int(bs.size*math.sqrt(bs.size)*asz/bsz)
    # positions of the bs boundaries on the uniform grid
    bbs = np.concatenate([[0],((nbins-1)*(bs-mn)/asz).astype(int).clip(0,nbins),[nbins]])
    # digitize bin for every lookup cell
    bins = np.repeat(np.arange(bs.size+1), np.diff(bbs))
    bbs = bbs[:bbs.searchsorted(nbins)]
    bins[bbs] = -1
    # look up each value of a; resolve ambiguous cells with the exact np.digitize
    aidx = bins[((nbins-1)*(a-mn)/asz).astype(int)]
    ambig = aidx == -1
    aa = a[ambig]
    if aa.size:
        aidx[ambig] = np.digitize(aa,bs,right)
    return aidx

def f_pp():
    bo = b.argsort()
    bs = b[bo]
    # bin index of every pixel with respect to the sorted thresholds
    aidx = h_digitize(a,bs,right=True).ravel()
    # group pixel indices by bin via a sparse matrix: after CSR->CSC conversion,
    # aux.indices is ordered by bin and aux.indptr[j+1] counts the pixels <= bs[j]
    aux = sparse.csr_matrix((aidx,aidx,np.arange(aidx.size+1)),
                            (aidx.size,b.size+1)).tocsc()
    ridx = np.empty(b.size,int)
    # draw one random pixel among the first aux.indptr[j+1] entries for each j,
    # then undo the argsort of b
    ridx[bo] = aux.indices[np.fromiter(map(np.random.randint,aux.indptr[1:-1].tolist()),int,b.size)]
    return np.unravel_index(ridx,a.shape)

# @MadPhysicist's approach from the answer above, for comparison
def f_mp():
    a_ind = np.argsort(a, axis=None)
    indices = np.searchsorted(a.ravel(), b, sorter=a_ind, side='right')
    return np.unravel_index(a_ind[[np.random.randint(i) for i in indices]], a.shape)


a = np.random.rand(267, 173) # dense img matrix
b = np.random.rand(199) # array of probability samples

# round to test whether equality is handled correctly
a = np.round(a,3)
b = np.round(b,3)

# timeit with number=1000 returns total seconds, which reads as ms per run
print('pp',timeit(f_pp, number=1000),'ms')
print('mp',timeit(f_mp, number=1000),'ms')

# sanity checks

S = np.max([a[f_pp()] for _ in range(1000)],axis=0)
T = np.max([a[f_mp()] for _ in range(1000)],axis=0)
print(f"inequality satisfied: pp {(S<=b).all()} mp {(T<=b).all()}")
print(f"largest smalles distance to boundary: pp {(b-S).max()} mp {(b-T).max()}")
print(f"equality done right: pp {not (b-S).all()} mp {not (b-T).all()}")

Using a tweaked digitize I'm a bit faster, but this may vary with problem size. Also, @MadPhysicist's solution is much less convoluted. With the standard digitize we are about equal (a sketch of that variant follows the timings below).

pp 2.620121960993856 ms
mp 3.301037881989032 ms
inequality satisfied: pp True mp True
largest smallest distance to boundary: pp 0.0040000000000000036 mp 0.006000000000000005
equality done right: pp True mp True
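
For reference, a minimal sketch of the standard-digitize variant mentioned above (the name f_pp_std is made up); it only swaps h_digitize for NumPy's built-in np.digitize and reuses a, b and the imports from the benchmark script:

def f_pp_std():
    bo = b.argsort()
    bs = b[bo]
    aidx = np.digitize(a, bs, right=True).ravel()   # exact digitize instead of h_digitize
    aux = sparse.csr_matrix((aidx, aidx, np.arange(aidx.size+1)),
                            (aidx.size, b.size+1)).tocsc()
    ridx = np.empty(b.size, int)
    ridx[bo] = aux.indices[np.fromiter(map(np.random.randint, aux.indptr[1:-1].tolist()), int, b.size)]
    return np.unravel_index(ridx, a.shape)
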
answered Sep 22 '22 by Paul Panzer