Get cumulative count per 2d array

I have general data, e.g. strings:

np.random.seed(343)

arr = np.sort(np.random.randint(5, size=(10, 10)), axis=1).astype(str)
print (arr)
[['0' '1' '1' '2' '2' '3' '3' '4' '4' '4']
 ['1' '2' '2' '2' '3' '3' '3' '4' '4' '4']
 ['0' '2' '2' '2' '2' '3' '3' '4' '4' '4']
 ['0' '1' '2' '2' '3' '3' '3' '4' '4' '4']
 ['0' '1' '1' '1' '2' '2' '2' '2' '4' '4']
 ['0' '0' '1' '1' '2' '3' '3' '3' '4' '4']
 ['0' '0' '2' '2' '2' '2' '2' '2' '3' '4']
 ['0' '0' '1' '1' '1' '2' '2' '2' '3' '3']
 ['0' '1' '1' '2' '2' '2' '3' '4' '4' '4']
 ['0' '1' '1' '2' '2' '2' '2' '2' '4' '4']]

I need a cumulative count that resets whenever consecutive values change, so pandas is used.

First create DataFrame:

df = pd.DataFrame(arr)
print (df)
   0  1  2  3  4  5  6  7  8  9
0  0  1  1  2  2  3  3  4  4  4
1  1  2  2  2  3  3  3  4  4  4
2  0  2  2  2  2  3  3  4  4  4
3  0  1  2  2  3  3  3  4  4  4
4  0  1  1  1  2  2  2  2  4  4
5  0  0  1  1  2  3  3  3  4  4
6  0  0  2  2  2  2  2  2  3  4
7  0  0  1  1  1  2  2  2  3  3
8  0  1  1  2  2  2  3  4  4  4
9  0  1  1  2  2  2  2  2  4  4

How it works for one column:

First compare shifted data and add cumulative sum:

a = (df[0] != df[0].shift()).cumsum()
print (a)
0    1
1    2
2    3
3    3
4    3
5    3
6    3
7    3
8    3
9    3
Name: 0, dtype: int32

And then call GroupBy.cumcount:

b = a.groupby(a).cumcount() + 1
print (b)
0    1
1    1
2    1
3    2
4    3
5    4
6    5
7    6
8    7
9    8
dtype: int64

To apply the solution to all columns, it is possible to use apply:

print (df.apply(lambda x: x.groupby((x != x.shift()).cumsum()).cumcount() + 1))
   0  1  2  3  4  5  6  7  8  9
0  1  1  1  1  1  1  1  1  1  1
1  1  1  1  2  1  2  2  2  2  2
2  1  2  2  3  1  3  3  3  3  3
3  2  1  3  4  1  4  4  4  4  4
4  3  2  1  1  1  1  1  1  5  5
5  4  1  2  2  2  1  1  1  6  6
6  5  2  1  1  3  1  1  1  1  7
7  6  3  1  1  1  2  2  2  2  1
8  7  1  2  1  1  3  1  1  1  1
9  8  2  3  2  2  4  1  1  2  2

But this is slow for large data. Is it possible to create some fast numpy solution?

I found solutions that work only for 1D arrays.
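For reference, a common 1D version of this trick (my own sketch for illustration, not from the question) marks group starts with flatnonzero and offsets each position by its own group-start index:

```python
import numpy as np

def grp_range_1d(a):
    # Cumulative count along a 1D array, resetting whenever
    # consecutive elements differ.
    idx = np.flatnonzero(np.r_[True, a[1:] != a[:-1]])  # group start indices
    counts = np.diff(np.r_[idx, a.size])                # group lengths
    ar = np.arange(a.size)
    # Subtract each position's own group-start index, then add 1
    return ar - np.repeat(ar[idx], counts) + 1

print(grp_range_1d(np.array(['0', '1', '1', '2'])))  # [1 1 2 1]
```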

asked Dec 04 '18 by jezrael

2 Answers

General Idea

Consider the generic case where we perform this cumulative counting, or, if you think of the results as ranges, we could call them - grouped ranges.

Now, the idea starts off simple - compare one-off slices along the respective axis to look for inequalities, and pad with True at the start of each row/col (depending on the axis of counting).

Then, it gets more involved - set up an ID array such that a final cumsum gives the desired output in its flattened order. The setup starts by initializing a 1s array of the same shape as the input array. At each group start in the input, offset the ID array by the previous group length, so that a cumsum over the flattened version yields the flattened desired output. Follow the code (the comments should give more insight) on how we would do it for each row -

def grp_range_2drow(a, start=0):
    # Get grouped ranges along each row with resetting at places where
    # consecutive elements differ
    
    # Input(s) : a is 2D input array
    
    # Store shape info
    m,n = a.shape
    
    # Compare one-off slices for each row and pad with True's at starts
    # Those True's indicate start of each group
    p = np.ones((m,1),dtype=bool)
    a1 = np.concatenate((p, a[:,:-1] != a[:,1:]),axis=1)
    
    # Get indices of group starts in flattened version
    d = np.flatnonzero(a1)

    # Setup ID array to be cumsumed finally for desired o/p 
    # Assign into starts with previous group lengths. 
    # Thus, when cumsumed on flattened version would give us flattened desired
    # output. Finally reshape back to 2D  
    c = np.ones(m*n,dtype=int)
    c[d[1:]] = d[:-1]-d[1:]+1
    c[0] = start
    return c.cumsum().reshape(m,n)
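As a quick sanity check (my addition), the row version on a tiny array, where each row's count restarts at every value change:

```python
import numpy as np

def grp_range_2drow(a, start=0):
    # Same function as above, repeated so this snippet is self-contained
    m, n = a.shape
    p = np.ones((m, 1), dtype=bool)
    a1 = np.concatenate((p, a[:, :-1] != a[:, 1:]), axis=1)
    d = np.flatnonzero(a1)
    c = np.ones(m * n, dtype=int)
    c[d[1:]] = d[:-1] - d[1:] + 1
    c[0] = start
    return c.cumsum().reshape(m, n)

a = np.array([[0, 0, 1, 1, 1],
              [2, 2, 2, 0, 0]])
print(grp_range_2drow(a, start=1))
# [[1 2 1 2 3]
#  [1 2 3 1 2]]
```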

We can extend this to the generic case of rows and columns. For the columns case, we simply transpose, feed into the earlier row solution, and finally transpose back, like so -

def grp_range_2d(a, start=0, axis=1):
    # Get grouped ranges along specified axis with resetting at places where
    # consecutive elements differ
    
    # Input(s) : a is 2D input array

    if axis not in [0, 1]:
        raise ValueError("Invalid axis")

    if axis==1:
        return grp_range_2drow(a, start=start)
    else:
        return grp_range_2drow(a.T, start=start).T

Sample run

Let's consider a sample run that finds grouped ranges along each column, with each group starting at 1 -

In [330]: np.random.seed(0)

In [331]: a = np.random.randint(1,3,(10,10))

In [333]: a
Out[333]: 
array([[1, 2, 2, 1, 2, 2, 2, 2, 2, 2],
       [2, 1, 1, 2, 1, 1, 1, 1, 1, 2],
       [1, 2, 2, 1, 1, 2, 2, 2, 2, 1],
       [2, 1, 2, 1, 2, 2, 1, 2, 2, 1],
       [1, 2, 1, 2, 2, 2, 2, 2, 1, 2],
       [1, 2, 2, 2, 2, 1, 2, 1, 1, 2],
       [2, 1, 2, 1, 2, 1, 1, 1, 1, 1],
       [2, 2, 1, 1, 1, 2, 2, 1, 2, 1],
       [1, 2, 1, 2, 2, 2, 2, 2, 2, 1],
       [2, 2, 1, 1, 2, 1, 1, 2, 2, 1]])

In [334]: grp_range_2d(a, start=1, axis=0)
Out[334]: 
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 2],
       [1, 1, 1, 1, 2, 1, 1, 1, 1, 1],
       [1, 1, 2, 2, 1, 2, 1, 2, 2, 2],
       [1, 1, 1, 1, 2, 3, 1, 3, 1, 1],
       [2, 2, 1, 2, 3, 1, 2, 1, 2, 2],
       [1, 1, 2, 1, 4, 2, 1, 2, 3, 1],
       [2, 1, 1, 2, 1, 1, 1, 3, 1, 2],
       [1, 2, 2, 1, 1, 2, 2, 1, 2, 3],
       [1, 3, 3, 1, 2, 1, 1, 2, 3, 4]])

Thus, to solve our case with DataFrame input and output, it would be -

out = grp_range_2d(df.values, start=1,axis=0)
pd.DataFrame(out,columns=df.columns,index=df.index)
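
A quick cross-check (my addition, on a small made-up DataFrame) that the numpy solution matches the slower pandas apply approach from the question:

```python
import numpy as np
import pandas as pd

# Functions from above, repeated so this check is self-contained
def grp_range_2drow(a, start=0):
    m, n = a.shape
    a1 = np.concatenate((np.ones((m, 1), dtype=bool),
                         a[:, :-1] != a[:, 1:]), axis=1)
    d = np.flatnonzero(a1)
    c = np.ones(m * n, dtype=int)
    c[d[1:]] = d[:-1] - d[1:] + 1
    c[0] = start
    return c.cumsum().reshape(m, n)

def grp_range_2d(a, start=0, axis=1):
    if axis == 1:
        return grp_range_2drow(a, start=start)
    return grp_range_2drow(a.T, start=start).T

df = pd.DataFrame([['a', 'a', 'b'],
                   ['a', 'b', 'b'],
                   ['a', 'b', 'c']])
out = pd.DataFrame(grp_range_2d(df.values, start=1, axis=0),
                   columns=df.columns, index=df.index)
# Must match the pandas approach from the question
expected = df.apply(lambda x: x.groupby((x != x.shift()).cumsum()).cumcount() + 1)
print((out.values == expected.values).all())  # True
```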
answered Oct 31 '22 by Divakar


And the numba solution. For such tricky problems, it always wins - here by a 7x factor vs numpy, since only one pass over res is done.

import numpy as np
from numba import njit

@njit
def thefunc(arrc):
    # arrc[i, j] is True when arr[i+1, j] == arr[i, j]
    m, n = arrc.shape
    res = np.empty((m + 1, n), np.uint32)
    res[0] = 1
    for i in range(1, m + 1):
        for j in range(n):
            if arrc[i - 1, j]:
                res[i, j] = res[i - 1, j] + 1
            else:
                res[i, j] = 1
    return res

def numbering(arr):
    return thefunc(arr[1:] == arr[:-1])

I had to compute arr[1:] == arr[:-1] outside the jitted function, since numba doesn't support string arrays.
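To illustrate on a tiny string array (my sketch, with the @njit decorator dropped so it runs without numba installed; the logic is otherwise identical):

```python
import numpy as np

def thefunc(arrc):
    # arrc[i, j] is True when arr[i+1, j] == arr[i, j]
    m, n = arrc.shape
    res = np.empty((m + 1, n), np.uint32)
    res[0] = 1
    for i in range(1, m + 1):
        for j in range(n):
            res[i, j] = res[i - 1, j] + 1 if arrc[i - 1, j] else 1
    return res

def numbering(arr):
    # Equality of consecutive rows is computed outside the kernel,
    # since numba does not handle string arrays.
    return thefunc(arr[1:] == arr[:-1])

arr = np.array([['0', '1'],
                ['0', '2'],
                ['1', '2']])
print(numbering(arr))
# [[1 1]
#  [2 1]
#  [1 2]]
```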

In [75]: %timeit numbering(arr)
13.7 µs ± 373 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

In [76]: %timeit grp_range_2dcol(arr)
111 µs ± 18.3 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

For a bigger array (100,000 rows x 100 cols), the gap is not so wide:

In [168]: %timeit a=grp_range_2dcol(arr)
1.54 s ± 11.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [169]: %timeit a=numbering(arr)
625 ms ± 43.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

If arr can be converted to 'S8', we can save a lot of time:

In [398]: %timeit arr[1:]==arr[:-1]
584 ms ± 12.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [399]: %timeit arr.view(np.uint64)[1:]==arr.view(np.uint64)[:-1]
196 ms ± 18.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
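
The conversion being timed would look roughly like this (my sketch; it assumes every string fits in 8 bytes and the array is C-contiguous):

```python
import numpy as np

arr = np.array([['0', '1'],
                ['0', '2']])                   # small stand-in for the big array
arr8 = np.ascontiguousarray(arr.astype('S8'))  # 8 bytes per element
u = arr8.view(np.uint64)                       # same shape, one uint64 per cell
# Comparing uint64s is much cheaper than comparing strings,
# and yields the same equality mask for the numba kernel:
mask = u[1:] == u[:-1]
print(mask)  # [[ True False]]
```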
answered Oct 31 '22 by B. M.