I want to select only certain rows from a NumPy array based on the value in the second column. For example, this test array has integers from 1 to 10 in the second column.
>>> test = numpy.array([numpy.arange(100), numpy.random.randint(1, 11, 100)]).transpose()
>>> test[:10, :]
array([[ 0,  6],
       [ 1,  7],
       [ 2, 10],
       [ 3,  4],
       [ 4,  1],
       [ 5, 10],
       [ 6,  6],
       [ 7,  4],
       [ 8,  6],
       [ 9,  7]])
If I wanted only rows where the second value is 4, it is easy:
>>> test[test[:, 1] == 4]
array([[ 3,  4],
       [ 7,  4],
       [16,  4],
       ...
       [81,  4],
       [83,  4],
       [88,  4]])
But how do I achieve the same result when there is more than one wanted value?
The wanted list can be of arbitrary length. For example, I may want all rows where the second column is either 2, 4 or 6:
>>> wanted = [2, 4, 6]
The only way I have come up with is to use a list comprehension and then convert the result back into an array. It works, but it seems too convoluted:
>>> test[numpy.array([test[x, 1] in wanted for x in range(len(test))])]
array([[ 0,  6],
       [ 3,  4],
       [ 6,  6],
       ...
       [90,  2],
       [91,  6],
       [92,  2]])
Is there a better way to do this in NumPy itself that I am missing?
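For reference, here is a self-contained version of the setup and the list-comprehension approach above. The fixed seed is an addition for reproducibility, so the exact rows will differ from the output shown:

import numpy

numpy.random.seed(0)  # added so the example is repeatable; not in the original

# 100 rows: first column is 0..99, second column is random integers from 1 to 10.
test = numpy.array([numpy.arange(100),
                    numpy.random.randint(1, 11, 100)]).transpose()

wanted = [2, 4, 6]

# Boolean mask built with a plain Python loop over the rows.
mask = numpy.array([test[x, 1] in wanted for x in range(len(test))])
print(test[mask])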
The following solution should be faster than Amnon's solution as wanted gets larger:
# Much faster look up than with lists, for larger lists:
wanted_set = set(wanted)

@numpy.vectorize
def selected(elmt):
    return elmt in wanted_set

# Or, equivalently:
# selected = numpy.vectorize(wanted_set.__contains__)

print(test[selected(test[:, 1])])
In fact, it has the advantage of searching through the test array only once (instead of up to len(wanted) times, as in Amnon's answer). It also uses Python's fast built-in element lookup in sets, which is much faster for this than lookup in lists, and it benefits from NumPy's fast loops. On top of that, you get the short-circuiting of the in operator: once a wanted element matches, the remaining elements do not have to be tested (as opposed to Amnon's "logical or" approach, where all the elements in wanted are tested no matter what).
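Not part of either answer, but worth noting: NumPy also has a built-in vectorized membership test, numpy.in1d (and numpy.isin in NumPy 1.13+), which builds the same boolean mask in a single call. A minimal sketch, assuming test and wanted as defined in the question:

# numpy.in1d(a, b) returns a boolean array saying, for each element of `a`,
# whether it appears anywhere in `b`.
mask = numpy.in1d(test[:, 1], wanted)
print(test[mask])

# NumPy >= 1.13 also offers the equivalent spelling numpy.isin:
# print(test[numpy.isin(test[:, 1], wanted)])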
Alternatively, you could use the following one-liner, which also goes through your array only once:
test[numpy.apply_along_axis(lambda x: x[1] in wanted, 1, test)]
This is much, much slower, though, as it extracts the second-column element at each iteration (instead of doing it in one pass, as in the first solution of this answer).
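To compare the two approaches on your own machine, a small timeit harness along these lines can be used. This is only a sketch: the fixed seed and the number of repetitions are arbitrary choices, and no timings are quoted here.

import timeit

setup = """
import numpy
numpy.random.seed(0)
test = numpy.array([numpy.arange(100),
                    numpy.random.randint(1, 11, 100)]).transpose()
wanted = [2, 4, 6]
wanted_set = set(wanted)
selected = numpy.vectorize(wanted_set.__contains__)
"""

# Set-based vectorized lookup versus the apply_along_axis one-liner:
print(timeit.timeit("test[selected(test[:, 1])]", setup=setup, number=1000))
print(timeit.timeit(
    "test[numpy.apply_along_axis(lambda x: x[1] in wanted, 1, test)]",
    setup=setup, number=1000))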
test[numpy.logical_or.reduce([test[:, 1] == x for x in wanted])]
The result should be faster than the original version, since NumPy is doing the inner loops instead of Python.
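Unrolled for clarity, and assuming test and wanted as defined in the question (with numpy already imported), this one-liner does the following:

# One boolean mask per wanted value (each mask has shape (len(test),)).
masks = [test[:, 1] == x for x in wanted]

# Element-wise logical OR across all the masks, giving a single mask.
combined = numpy.logical_or.reduce(masks)

# Rows of `test` whose second column matched any wanted value.
result = test[combined]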