I have a (large) data array and a (large) list of lists of (a few) indices, e.g.,
data = [1.0, 10.0, 100.0]
contribs = [[1, 2], [0], [0, 1]]
For each entry in contribs, I'd like to sum up the corresponding values of data and put those into an array. For the above example, the expected result would be
out = [110.0, 1.0, 11.0]
Doing this in a loop works,

import numpy

c = numpy.zeros(len(contribs))
for k, indices in enumerate(contribs):
    for idx in indices:
        c[k] += data[idx]

but since data and contribs are large, it's taking too long.
I have the feeling this can be improved using numpy's fancy indexing.
Any hints?
One possibility would be

import numpy as np

data = np.array(data)
out = [np.sum(data[c]) for c in contribs]

It should be faster than the double loop, at least.
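As a quick check, here is this approach on the sample input from the question (a minimal sketch; it just reproduces the expected output given above):

import numpy as np

data = np.array([1.0, 10.0, 100.0])
contribs = [[1, 2], [0], [0, 1]]

out = [np.sum(data[c]) for c in contribs]
print(np.asarray(out))  # [110.   1.  11.] -- matches the expected result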
Here's an almost vectorized* approach -

# Get the lengths of the list elements in contribs and their cumulative
# lengths, to be used for creating an ID array later on.
clens = np.cumsum([len(item) for item in contribs])

# Set up an ID array that assigns the same ID to entries from the same list
# element in contribs. These IDs are used to accumulate values from a
# corresponding array created by indexing into data with a flattened contribs.
id_arr = np.zeros(clens[-1], dtype=int)
id_arr[clens[:-1]] = 1
out = np.bincount(id_arr.cumsum(), np.take(data, np.concatenate(contribs)))
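To make the mechanics concrete, here is a hand-traced run on the sample input from the question (the intermediate values in the comments are worked out for this example):

import numpy as np

data = np.array([1.0, 10.0, 100.0])
contribs = [[1, 2], [0], [0, 1]]

clens = np.cumsum([len(item) for item in contribs])  # [2, 3, 5]

id_arr = np.zeros(clens[-1], dtype=int)  # [0, 0, 0, 0, 0]
id_arr[clens[:-1]] = 1                   # [0, 0, 1, 1, 0] -- marks where each new list starts
ids = id_arr.cumsum()                    # [0, 0, 1, 2, 2] -- one ID per flattened index

flat = np.take(data, np.concatenate(contribs))  # data[[1, 2, 0, 0, 1]] -> [10., 100., 1., 1., 10.]
out = np.bincount(ids, flat)                    # per-ID sums: [110., 1., 11.]
print(out)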
This approach involves some setup work, so the benefits should show up when it is fed decently sized input arrays and a decent number of list elements in contribs, which is what drives the iteration count in an otherwise loopy solution.
*Please note that this is called almost vectorized because the only looping performed here is at the start, where we get the lengths of the list elements. That part is not very computationally demanding and should have a minimal effect on the total runtime.
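One way to check the performance claim is a rough timing harness like the one below (a sketch; the input sizes, the no-empty-lists assumption, and the use of timeit are my choices, not part of the answers above):

import timeit
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
data = rng.random(n)
# Each list holds 1 to 4 indices; the bincount approach as written assumes
# no list is empty, so sizes are drawn from [1, 5).
contribs = [list(rng.integers(0, n, size=rng.integers(1, 5)))
            for _ in range(10_000)]

def double_loop():
    c = np.zeros(len(contribs))
    for k, indices in enumerate(contribs):
        for idx in indices:
            c[k] += data[idx]
    return c

def list_comp():
    return [np.sum(data[c]) for c in contribs]

def almost_vectorized():
    clens = np.cumsum([len(item) for item in contribs])
    id_arr = np.zeros(clens[-1], dtype=int)
    id_arr[clens[:-1]] = 1
    return np.bincount(id_arr.cumsum(), np.take(data, np.concatenate(contribs)))

for fn in (double_loop, list_comp, almost_vectorized):
    print(fn.__name__, timeit.timeit(fn, number=10) / 10, "s per run")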