So I have a list of tuples such as this:
[(1,"juca"),(22,"james"),(53,"xuxa"),(44,"delicia")]
I want to search this list for a tuple whose number value is equal to something.
So that if I do search(53)
it will return the index value of 2
Is there an easy way to do this?
[i for i, v in enumerate(L) if v[0] == 53]
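For example, wrapped in a small search function (the name search and the module-level list here are just for illustration), the comprehension returns a list of every matching index:

```python
data = [(1, "juca"), (22, "james"), (53, "xuxa"), (44, "delicia")]

def search(num):
    # Collect the index of every tuple whose first element equals num
    return [i for i, v in enumerate(data) if v[0] == num]

print(search(53))  # [2]
print(search(99))  # [] when no tuple matches
```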
A generator expression is probably the simplest and most performant solution to your problem:
l = [(1, "juca"), (22, "james"), (53, "xuxa"), (44, "delicia")]
result = next((i for i, v in enumerate(l) if v[0] == 53), None)  # 2
There are several answers that provide a simple solution to this question with list comprehensions. While these answers are perfectly correct, they are not optimal. Depending on your use case, there may be significant benefits to making a few simple modifications.
The main problem I see with using a list comprehension for this use case is that the entire list will be processed, even though you only want to find one element.
Python provides a simple construct which is ideal here. It is called the generator expression. Here is an example:
# Our input list, same as before
l = [(1, "juca"), (22, "james"), (53, "xuxa"), (44, "delicia")]

# Call next on our generator expression.
next((i for i, v in enumerate(l) if v[0] == 53), None)
We can expect this method to perform basically the same as a list comprehension in our trivial example, but what if we're working with a larger data set? That's where the advantage of the generator method comes into play. Rather than constructing a new list, we'll use your existing list as our iterable and call next() to get the first item from our generator.
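As a quick illustration of that call (the None sentinel is an arbitrary choice; any default works), next() returns the default instead of raising StopIteration when nothing matches:

```python
l = [(1, "juca"), (22, "james"), (53, "xuxa"), (44, "delicia")]

# First matching index, or None when the key is absent
print(next((i for i, v in enumerate(l) if v[0] == 53), None))  # 2
print(next((i for i, v in enumerate(l) if v[0] == 99), None))  # None
```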
Let's look at how these methods perform on some larger data sets: lists of 10,000,000 + 1 elements, with our target at the beginning (best case) or at the end (worst case). We can first verify that the list comprehension takes essentially the same time in both cases:
worst_case = ([(False, 'F')] * 10000000) + [(True, 'T')]
print([i for i, v in enumerate(worst_case) if v[0] is True])
# [10000000]
#
# 2 function calls in 3.885 seconds
#
# Ordered by: standard name
#
# ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
#      1    3.885    3.885    3.885    3.885  so_lc.py:1(<module>)
#      1    0.000    0.000    0.000    0.000  {method 'disable' of '_lsprof.Profiler' objects}
best_case = [(True, 'T')] + ([(False, 'F')] * 10000000)
print([i for i, v in enumerate(best_case) if v[0] is True])
# [0]
#
# 2 function calls in 3.864 seconds
#
# Ordered by: standard name
#
# ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
#      1    3.864    3.864    3.864    3.864  so_lc.py:1(<module>)
#      1    0.000    0.000    0.000    0.000  {method 'disable' of '_lsprof.Profiler' objects}
Here's my hypothesis for generators: they will perform significantly better in the best case, but about the same in the worst case. This performance gain comes mostly from the fact that the generator is evaluated lazily, meaning it will only compute what is required to yield a value.
worst_case = ([(False, 'F')] * 10000000) + [(True, 'T')]
print(next((i for i, v in enumerate(worst_case) if v[0] == True), None))
# 10000000
#
# 5 function calls in 1.733 seconds
#
# Ordered by: standard name
#
# ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
#      2    1.455    0.727    1.455    0.727  so_lc.py:10(<genexpr>)
#      1    0.278    0.278    1.733    1.733  so_lc.py:9(<module>)
#      1    0.000    0.000    0.000    0.000  {method 'disable' of '_lsprof.Profiler' objects}
#      1    0.000    0.000    1.455    1.455  {next}
best_case = [(True, 'T')] + ([(False, 'F')] * 10000000)
print(next((i for i, v in enumerate(best_case) if v[0] == True), None))
# 0
#
# 5 function calls in 0.316 seconds
#
# Ordered by: standard name
#
# ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
#      1    0.316    0.316    0.316    0.316  so_lc.py:6(<module>)
#      2    0.000    0.000    0.000    0.000  so_lc.py:7(<genexpr>)
#      1    0.000    0.000    0.000    0.000  {method 'disable' of '_lsprof.Profiler' objects}
#      1    0.000    0.000    0.000    0.000  {next}
WHAT?! The best case blows the list comprehension away, but I wasn't expecting our worst case to outperform it to such an extent. How can that be? Frankly, I could only speculate without further research.
Take all of this with a grain of salt; I have not run any robust profiling here, just some very basic testing. Still, it should be sufficient to show that a generator expression is more performant for this type of list searching.
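If you want to reproduce a rough comparison yourself without cProfile, a minimal sketch with the standard-library timeit module could look like this (the list size and repeat count are scaled down here so it finishes quickly; they are my choices, not the numbers from the profiles above):

```python
import timeit

# Best-case list: the target tuple sits at index 0
setup = "best_case = [(True, 'T')] + [(False, 'F')] * 100000"

# The comprehension always scans the whole list ...
lc = "[i for i, v in enumerate(best_case) if v[0] is True]"
# ... while the generator stops as soon as it yields a match
gen = "next((i for i, v in enumerate(best_case) if v[0] is True), None)"

t_lc = timeit.timeit(lc, setup=setup, number=10)
t_gen = timeit.timeit(gen, setup=setup, number=10)
print(t_lc, t_gen)  # the generator time should be far smaller
```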
Note that this is all basic, built-in Python; we don't need to import anything or use any third-party libraries.
I first saw this technique for searching in the Udacity cs212 course with Peter Norvig.