
Grouping indices of unique elements in numpy

I have many large (>100,000,000) lists of integers that contain many duplicates. I want to get the indices at which each element occurs. Currently I am doing something like this:

import numpy as np
from collections import defaultdict

a = np.array([1, 2, 6, 4, 2, 3, 2])
d = defaultdict(list)
for i, e in enumerate(a):
    d[e].append(i)

d
defaultdict(<type 'list'>, {1: [0], 2: [1, 4, 6], 3: [5], 4: [3], 6: [2]})

This method of iterating through each element is time consuming. Is there an efficient or vectorized way to do this?

Edit 1: I tried the methods of Acorbe and Jaime on the following:

a = np.random.randint(2000, size=10000000)

The results are

original: 5.01767015457 secs
Acorbe: 6.11163902283 secs
Jaime: 3.79637312889 secs
asked Apr 24 '14 by imsc


2 Answers

I know this is an old question, but I was recently working on a similar thing where performance is critical and so I experimented extensively with timing. I hope my findings will be beneficial to the community.

Jaime's solution based on np.unique is the fastest algorithm possible in Python, but with one caveat: the indices within each group are not ordered (because numpy uses quicksort by default), so the result differs from the OP's original algorithm (hereafter called naive). Using the stable sort option fixes this, but slows things down a bit.
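
For reference, the only change needed to get ordered indices out of the sorting-based approach is the kind argument to np.argsort (a minimal sketch with my own variable names; the difference only becomes visible on larger arrays):

import numpy as np

a = np.array([1, 2, 6, 4, 2, 3, 2])

# default sort: fastest, but equal elements may be permuted, so the index
# list inside each group is not guaranteed to be ascending
quick_idx = np.argsort(a)

# stable sort: equal elements keep their original order, so every group of
# indices comes out ascending, matching the naive defaultdict result
stable_idx = np.argsort(a, kind="stable")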

The naive method can be improved using Python's built-in array module like this:

import array
from collections import defaultdict

import numpy as np

a = np.array(...)  # 1D int array; its itemsize must match array.array("L") (8 bytes on most 64-bit Unix builds)
d = defaultdict(lambda: array.array("L"))
alist = array.array("L")
alist.frombytes(a.tobytes())  # copy the raw buffer; items are read back as plain Python ints
for n in range(len(alist)):
    d[alist[n]].append(n)

It's only fractionally slower than Jaime's solution with stable sort.

Here's some testing done on my platform with Python 3

Best of 5
Naive method: 0.21274029999999988 s
Naive improved: 0.13265090000000002 s
Unique quick: 0.073496 s
Unique stable: 0.1235801999999997 s

The results from the naive method, naive improved, and unique stable are dictionaries with sorted lists of indices. Indices from unique quick are not sorted.

The benchmark code

import array
import timeit
from collections import defaultdict

import numpy as np

def count_naive(a):
    d = defaultdict(list)
    for n, e in enumerate(a):
        d[e].append(n)
    return dict(d)

def count_improved(a):
    d = defaultdict(lambda: array.array("L"))
    alist = array.array("L")
    alist.frombytes(a.tobytes())
    for n in range(len(alist)):
        d[alist[n]].append(n)
    return {n: indices.tolist() for n, indices in d.items()}

def count_unique(a):
    sorted_idx = np.argsort(a)  # default quicksort; indices within a group may come out unordered
    counts = np.bincount(a)
    split_idx = np.split(sorted_idx, np.cumsum(counts[:-1]))
    return {n: indices.tolist() for n, indices in enumerate(split_idx)}

def count_stable(a):
    sorted_idx = np.argsort(a, kind="stable")
    counts = np.bincount(a)
    split_idx = np.split(sorted_idx, np.cumsum(counts[:-1]))
    return {n: indices.tolist() for n, indices in enumerate(split_idx)}

a = np.random.randint(1000, size=1000000)

trials = 5
t_naive = timeit.repeat("count_naive(a)", globals=globals(), repeat=trials, number=1)
t_improved = timeit.repeat("count_improved(a)", globals=globals(), repeat=trials, number=1)
t_unique = timeit.repeat("count_unique(a)", globals=globals(), repeat=trials, number=1)
t_stable = timeit.repeat("count_stable(a)", globals=globals(), repeat=trials, number=1)

print(f"Best of {trials}")
print(f"Naive method: {min(t_naive)} s")
print(f"Naive improved: {min(t_improved)} s")
print(f"Unique quick: {min(t_unique)} s")
print(f"Unique stable: {min(t_stable)} s")

N.B. All functions are written so that they return Dict[int, list], which lets the results be compared directly.
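
As a quick sanity check on top of the benchmark (my addition; it assumes every value in range(1000) actually occurs in a, which is effectively guaranteed at this sample size), the order-preserving variants should agree exactly, while the quicksort variant only matches after sorting each index list:

ref = count_naive(a)
assert count_improved(a) == ref
assert count_stable(a) == ref
# count_unique groups correctly but may permute indices within a group,
# so compare it after sorting each list
assert {k: sorted(v) for k, v in count_unique(a).items()} == ref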

answered Oct 16 '22 by tboschi

This is very similar to what was asked here, so what follows is an adaptation of my answer there. The simplest way to vectorize this is to use sorting. The following code borrows a lot from the implementation of np.unique for the upcoming version 1.9, which includes unique item counting functionality:

>>> a = np.array([1, 2, 6, 4, 2, 3, 2])
>>> sort_idx = np.argsort(a)
>>> a_sorted = a[sort_idx]
>>> unq_first = np.concatenate(([True], a_sorted[1:] != a_sorted[:-1]))
>>> unq_items = a_sorted[unq_first]
>>> unq_count = np.diff(np.nonzero(unq_first)[0])

and now:

>>> unq_items
array([1, 2, 3, 4, 6])
>>> unq_count
array([1, 3, 1, 1], dtype=int64)

To get the positional indices for each value, we simply do:

>>> unq_idx = np.split(sort_idx, np.cumsum(unq_count))
>>> unq_idx
[array([0], dtype=int64), array([1, 4, 6], dtype=int64), array([5], dtype=int64),
 array([3], dtype=int64), array([2], dtype=int64)]

And you can now construct your dictionary by zipping unq_items and unq_idx.
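
For example, reusing the arrays computed above (the exact repr of the keys and dtypes may differ between numpy versions):

>>> dict(zip(unq_items, unq_idx))
{1: array([0], dtype=int64), 2: array([1, 4, 6], dtype=int64), 3: array([5], dtype=int64),
 4: array([3], dtype=int64), 6: array([2], dtype=int64)}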

Note that unq_count doesn't count the occurrences of the last unique item, because that is not needed to split the index array. If you wanted to have all the counts, you could do:

>>> unq_count = np.diff(np.concatenate(np.nonzero(unq_first) + ([a.size],)))
>>> unq_idx = np.split(sort_idx, np.cumsum(unq_count[:-1]))
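
Putting the pieces together, a self-contained helper along these lines (my own wrapper around the steps above, not part of the original answer) returns the same mapping as the OP's defaultdict loop, using a stable sort so the indices within each group come out in ascending order:

import numpy as np

def group_indices(a):
    """Map each unique value of a 1D array to the array of its positions."""
    sort_idx = np.argsort(a, kind="stable")  # stable keeps indices ordered within each group
    a_sorted = a[sort_idx]
    unq_first = np.concatenate(([True], a_sorted[1:] != a_sorted[:-1]))
    unq_items = a_sorted[unq_first]
    unq_count = np.diff(np.nonzero(unq_first)[0])
    return dict(zip(unq_items, np.split(sort_idx, np.cumsum(unq_count))))

# group_indices(np.array([1, 2, 6, 4, 2, 3, 2]))
# -> {1: array([0]), 2: array([1, 4, 6]), 3: array([5]), 4: array([3]), 6: array([2])}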

answered Oct 16 '22 by Jaime