Given a list of numbers, like this:
lst = [0, 10, 15, 17]
I'd like a list that has elements from i -> i + 3 for all i in lst. If there are overlapping ranges, I'd like them merged.
So, for the example above, we first get:
[0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 17, 18, 19, 20]
But for the last 2 groups, the ranges overlap, so upon merging them, you have:
[0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20]
This is my desired output.
This is what I've thought of:
from collections import OrderedDict
res = list(OrderedDict.fromkeys([y for x in lst for y in range(x, x + 4)]).keys())
print(res)  # [0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20]
However, this is slow (10000 loops, best of 3: 56 µs per loop). I'd like a numpy solution if possible, or a Python solution that's faster than this.
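One direction I considered, assuming the input list stays sorted (as in the example), is to merge while generating instead of deduplicating afterwards; merge_expand is just my placeholder name:

```python
def merge_expand(lst, interval_len=4):
    # assumes lst is sorted in increasing order
    out = []
    for x in lst:
        # if the new range overlaps what we've already emitted,
        # start just past the last emitted value
        start = x if not out or x > out[-1] else out[-1] + 1
        out.extend(range(start, x + interval_len))
    return out

print(merge_expand([0, 10, 15, 17]))
# [0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20]
```

I haven't profiled this carefully, so I'm still interested in a vectorized answer.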
Approach #1: One approach based on broadcasted summation and then using np.unique to get the unique numbers -
import numpy as np

np.unique(np.asarray(lst)[:,None] + np.arange(4))
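On the question's sample input this one-liner gives the desired merged output directly, since np.unique both deduplicates and sorts:

```python
import numpy as np

lst = [0, 10, 15, 17]
# broadcast each start value against offsets 0..3, then deduplicate
res = np.unique(np.asarray(lst)[:, None] + np.arange(4))
print(res.tolist())
# [0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20]
```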
Approach #2 : Another based on broadcasted summation and then masking -
def mask_app(lst, interval_len=4):
    # assumes lst is sorted in increasing order
    arr = np.array(lst)
    r = np.arange(interval_len)
    ranged_vals = arr[:, None] + r
    # an offset is valid only if the gap to the next element exceeds it;
    # the last element keeps its full range
    a_diff = arr[1:] - arr[:-1]
    valid_mask = np.vstack((a_diff[:, None] > r, np.ones(interval_len, dtype=bool)))
    return ranged_vals[valid_mask]
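A quick sanity check on the question's sample input (the function is repeated here so the snippet runs standalone):

```python
import numpy as np

def mask_app(lst, interval_len=4):
    # assumes lst is sorted in increasing order
    arr = np.array(lst)
    r = np.arange(interval_len)
    ranged_vals = arr[:, None] + r
    a_diff = arr[1:] - arr[:-1]
    # mask off offsets that overlap the next element's range
    valid_mask = np.vstack((a_diff[:, None] > r, np.ones(interval_len, dtype=bool)))
    return ranged_vals[valid_mask]

out = mask_app([0, 10, 15, 17])
print(out.tolist())
# [0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20]
```

For the overlapping pair 15 and 17, the gap of 2 keeps only offsets 0 and 1 from 15's range, so 17 and 18 come solely from 17's range and no duplicates are produced.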
Runtime test
Original approach -
from collections import OrderedDict
def org_app(lst):
    return list(OrderedDict.fromkeys([y for x in lst for y in range(x, x + 4)]).keys())
Timings -
In [409]: n = 10000
In [410]: lst = np.unique(np.random.randint(0,4*n,(n))).tolist()
In [411]: %timeit org_app(lst)
...: %timeit np.unique(np.asarray(lst)[:,None] + np.arange(4))
...: %timeit mask_app(lst, interval_len = 4)
...:
10 loops, best of 3: 32.7 ms per loop
1000 loops, best of 3: 1.03 ms per loop
1000 loops, best of 3: 671 µs per loop
In [412]: n = 100000
In [413]: lst = np.unique(np.random.randint(0,4*n,(n))).tolist()
In [414]: %timeit org_app(lst)
...: %timeit np.unique(np.asarray(lst)[:,None] + np.arange(4))
...: %timeit mask_app(lst, interval_len = 4)
...:
1 loop, best of 3: 350 ms per loop
100 loops, best of 3: 14.7 ms per loop
100 loops, best of 3: 9.73 ms per loop
The bottleneck in both posted approaches seems to be the conversion of the list to an array, though that cost pays off well afterwards. Just to give a sense of the time spent on the conversion for the last dataset -
In [415]: %timeit np.array(lst)
100 loops, best of 3: 5.6 ms per loop
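So if the data already lives in a NumPy array upstream, that conversion cost disappears; the broadcasted summation works on arrays directly:

```python
import numpy as np

# already an array: no list-to-array conversion step needed
arr = np.array([0, 10, 15, 17])
res = np.unique(arr[:, None] + np.arange(4))
print(res.tolist())
# [0, 1, 2, 3, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20]
```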