I have two large NumPy arrays, each with shape (519990,), that look something like this:
Order = array([0, 0, 0, 5, 6, 10, 14, 14, 14, 23, 23, 39])
Letters = array(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L'])
As you can see, the first array is always sorted in ascending order and contains only non-negative numbers. I would like to group the elements of Letters by their corresponding Order values, so the result looks like this:
{0: ['A', 'B', 'C'], 5: ['D'], 6: ['E'], 10: ['F'], 14: ['G', 'H', 'I'], 23: ['J', 'K'], 39: ['L']}
The code I'm currently using to do this is:
import numpy as np
import pandas as pd

df = pd.DataFrame()
df['order'] = Order
df['letters'] = Letters
linearDict = df.groupby('order').apply(lambda dfg: dfg.drop('order', axis=1).to_dict(orient='list')).to_dict()
endProduct = {}
for k, v in linearDict.items():
    endProduct[k] = np.array(linearDict[k]['letters'][0:])
which produces:
endProduct = {0: array(['A', 'B', 'C']), 5: array(['D']), 6: array(['E']), 10: array(['F']), 14: array(['G', 'H', 'I']), 23: array(['J', 'K']), 39: array(['L'])}
My problem is that this process is BEYOND slow. It's such a drain on the system that it crashes my Jupyter Notebook. Is there a faster way of doing this?
We could leverage the fact that Order is sorted to simply slice Letters after getting the intervaled-indices, like so -
import numpy as np

def numpy_slice(Order, Letters):
    Order = np.asarray(Order)
    Letters = np.asarray(Letters)
    # indices where a new run of equal Order values starts, plus both ends
    idx = np.flatnonzero(np.r_[True, Order[:-1] != Order[1:], True])
    # slice Letters between consecutive run boundaries
    return {Order[i]: Letters[i:j] for (i, j) in zip(idx[:-1], idx[1:])}
Sample run -
In [66]: Order
Out[66]: array([16, 16, 16, 16, 23, 30, 33, 33, 39, 39, 39, 39, 39, 39, 39])
In [67]: Letters
Out[67]:
array(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
'N', 'O'], dtype='<U1')
In [68]: numpy_slice(Order, Letters)
Out[68]:
{16: array(['A', 'B', 'C', 'D'], dtype='<U1'),
23: array(['E'], dtype='<U1'),
30: array(['F'], dtype='<U1'),
33: array(['G', 'H'], dtype='<U1'),
39: array(['I', 'J', 'K', 'L', 'M', 'N', 'O'], dtype='<U1')}
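For reference, a minimal equivalent sketch using np.unique with return_index (this variant is my addition rather than part of the original answer; it relies on Order being sorted in exactly the same way):

import numpy as np

def numpy_split(Order, Letters):
    # first-occurrence index of each unique value; on a sorted array
    # these are exactly the run starts
    keys, starts = np.unique(np.asarray(Order), return_index=True)
    # split Letters at every run start except the first (index 0)
    groups = np.split(np.asarray(Letters), starts[1:])
    return dict(zip(keys.tolist(), groups))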
Use:
data = df.groupby('order')['letters'].agg(list).to_dict()
We can further improve the performance by passing sort=False and aggregating to tuple instead of list:
data = df.groupby('order', sort=False)['letters'].agg(tuple).to_dict()
Result (shown here on the small sample arrays used in the timings below, before np.repeat):
# print(data)
{0: ('A', 'B', 'C'), 1: ('D', 'E', 'F'), 2: ('G', 'H', 'I'), 3: ('J', 'K', 'L')}
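A self-contained run of this approach on the question's data (the DataFrame construction is a sketch of what the question already does):

import numpy as np
import pandas as pd

Order = np.array([0, 0, 0, 5, 6, 10, 14, 14, 14, 23, 23, 39])
Letters = np.array(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L'])

df = pd.DataFrame({'order': Order, 'letters': Letters})
data = df.groupby('order', sort=False)['letters'].agg(tuple).to_dict()
# {0: ('A', 'B', 'C'), 5: ('D',), 6: ('E',), 10: ('F',), 14: ('G', 'H', 'I'), 23: ('J', 'K'), 39: ('L',)}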
timeit performance results:
df.shape
(1200000, 2)
o = np.repeat([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3], 100000)
l = np.repeat(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L'], 100000)
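The cells below also assume a DataFrame built from these arrays, plus a few imports; a minimal setup sketch (my assumption, the original does not show it):

import numpy as np
import pandas as pd
from collections import defaultdict
from itertools import chain, groupby
from operator import itemgetter

df = pd.DataFrame({'order': o, 'letters': l})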
***Fastest answer***
%%timeit -n10 @Divakar
idx = np.flatnonzero(np.r_[True,o[:-1]!=o[1:],True])
{o[i]:l[i:j] for (i,j) in zip(idx[:-1],idx[1:])}
1.44 ms ± 243 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
*******************
%%timeit -n10 @Scott
grp = np.cumsum(np.unique(o, return_counts=True)[1])
arr = np.stack(np.split(l, grp)[:-1])
{n: k for n, k in enumerate(arr.tolist())}
38.5 ms ± 699 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
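(Note: np.stack only works here because every group has the same length, and enumerate only reproduces the keys because they happen to be 0 through 3; as written, this approach would not generalize to the question's data.)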
%%timeit -n10 @shubham 2
data = df.groupby('order', sort=False)['letters'].agg(tuple).to_dict()
118 ms ± 3.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit -n10 @shubham 1
data = df.groupby('order')['letters'].agg(list).to_dict()
177 ms ± 4.43 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit -n10 @anky 1
d = dict([*chain(*map(dict.items, [{k: [*zip(*g)][1]}
                                   for k, g in groupby(zip(o, l), itemgetter(0))]))])
636 ms ± 23.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%%timeit -n10 @anky 2
_ = dict([(k,list(zip(*g))[1]) for k,g in groupby(zip(o,l),itemgetter(0))])
659 ms ± 36.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
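(Both anky variants use itertools.groupby, which only groups consecutive equal keys, so they too depend on o being sorted.)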
%%timeit -n10 @Ch3ster
new = defaultdict(list)
for k, v in zip(o, l):
    new[k].append(v)
602 ms ± 1.56 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)