Vectorized pythonic way to get count of elements greater than current element

I'd like to compare every entry in array b with its respective column to find how many entries (from that column) are larger than the reference. My code seems embarrassingly slow and I suspect it is due to for loops rather than vector operations.

Can we 'vectorize' the following code?

import numpy as np

n = 1000
m = 200
b = np.random.rand(n,m)
p = np.zeros((n,m))

for i in range(n):        # rows
    for j in range(m):    # cols
        r = b[i,j]
        p[i,j] = (b[:,j] > r).sum() / n

After some more thought, I think sorting each column would improve overall runtime by making the later comparisons much faster.

After some more searching, I believe what I want is a column-wise percentileofscore (http://lagrange.univ-lyon1.fr/docs/scipy/0.17.1/generated/scipy.stats.percentileofscore.html).
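For reference, a minimal column-wise cross-check with scipy.stats.percentileofscore (a sketch added here, still loopy and slow): with kind='weak' it reports the percentage of column entries <= the score, so 100 minus that is the percentage strictly greater, matching p above.

from scipy import stats

# Cross-check: kind='weak' gives the percentage of entries <= b[i,j];
# 100 minus that is the percentage strictly greater.
p_check = np.zeros((n, m))
for j in range(m):
    for i in range(n):
        p_check[i, j] = (100 - stats.percentileofscore(b[:, j], b[i, j],
                                                       kind='weak')) / 100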

— asked by Eruditio, Apr 17 '18


2 Answers

It just needed a bit of deeper study to figure out that we can simply use argsort() indices along each column to get, for each element, the count of entries greater than it.

Approach #1

With that theory in mind, one solution would be to simply apply argsort twice to get the counts -

p = len(b)-1-b.argsort(0).argsort(0)
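To see why the double argsort gives the counts, here's a tiny one-column illustration (an added sketch, assuming no ties among the values): the first argsort gives the sorting order, argsort-ing that order gives each element's rank (its number of smaller entries), and subtracting the rank from n-1 counts the larger ones.

x = np.array([0.3, 0.9, 0.1, 0.5])
ranks = x.argsort().argsort()   # [1, 3, 0, 2] : number of smaller entries
counts = len(x) - 1 - ranks     # [2, 0, 3, 1] : number of larger entries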

Approach #2

We could optimize it further, given that the argsort indices along each column are a permutation of 0 to n-1. So, the second argsort step can be replaced with array-assignment + advanced-indexing, like so -

def count_occ(b):
    n,m = b.shape
    out = np.zeros((n,m),dtype=int)
    idx = b.argsort(0)  # per column: row indices in ascending order
    out[idx, np.arange(m)] = n-1-np.arange(n)[:,None]  # k-th smallest has n-1-k larger entries
    return out

Finally, for both approaches, divide by n as noted in the question.
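A quick cross-check of both approaches against the question's brute-force double loop (a small added sketch; it assumes the values are unique, as with random floats):

bs = np.random.rand(50, 8)
brute = np.array([[(bs[:, j] > bs[i, j]).sum() for j in range(8)]
                  for i in range(50)])
assert np.array_equal(brute, len(bs) - 1 - bs.argsort(0).argsort(0))
assert np.array_equal(brute, count_occ(bs))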


Benchmarking

Timing all the approaches posted thus far -

In [174]: np.random.seed(0)
     ...: n = 1000
     ...: m = 200
     ...: b = np.random.rand(n,m)

In [175]: %timeit (len(b)-1-b.argsort(0).argsort(0))/float(n)
100 loops, best of 3: 17.6 ms per loop

In [176]: %timeit count_occ(b)/float(n)
100 loops, best of 3: 12.1 ms per loop

# @Brad Solomon's soln
In [177]: %timeit np.sum(b > b[:, None], axis=-2) / float(n)
1 loop, best of 3: 349 ms per loop

# @marco romelli's loopy soln
In [178]: %timeit loopy_soln(b)/float(n)
10 loops, best of 3: 139 ms per loop
— answered by Divakar, Nov 03 '22


I think the following solution is much faster:

import numpy as np

n = 1000
m = 200
b = np.random.rand(n,m)
p = np.zeros((n,m))

for i in range(m):                                     # columns
    col = []
    for j in range(n):
        col.append((j, b[j,i]))                        # (row index, value) pairs
    a = sorted(col, key=lambda x: x[1], reverse=True)  # sort column descending by value
    for j in range(n):
        p[a[j][0], i] = j                              # position in descending order
                                                       # = count of larger entries

It works column by column and is based on sorting, so a rough estimate of the complexity is O(n·m·log n).
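The same descending-sort idea can also be expressed with NumPy's argsort doing the per-column work (an added sketch, not part of the original answer):

order = np.argsort(-b, axis=0)                  # row indices, largest value first
p = np.zeros((n, m))
p[order, np.arange(m)] = np.arange(n)[:, None]  # sorted position = count of larger entries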

EDIT

Benchmark results

Original code: 1.46 s ± 8.28 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

This answer: 178 ms ± 295 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

— answered by marco romelli, Nov 03 '22