Lookups are faster in dictionaries because Python implements them using hash tables. In Big O terms, dictionary lookups have constant time complexity, O(1), while list lookups have linear time complexity, O(n).
Use a dictionary when you have a set of unique keys that map to values. Use a list if you have an ordered collection of items. Use a set to store an unordered set of items.
A dictionary is therefore faster than a list for element lookup in Python; it takes less time to find an element. One trade-off: lists preserve the order of their elements, while dictionaries historically did not (though since Python 3.7, dicts do preserve insertion order).
So the dictionary is faster because you are using a better algorithm. A dictionary does a lookup, while a list does an iteration: the dictionary uses a hash lookup, whereas the list must be walked from the beginning until the result is found, every single time.
Lookups in lists are O(n), lookups in dictionaries are amortized O(1), with regard to the number of items in the data structure. If you don't need to associate values, use sets.
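To see the asymptotic difference directly, here is a minimal sketch (Python 3; the sizes and names are illustrative, and timings will vary by machine):

import timeit

# One million integers stored both ways; the target sits at the very end,
# which is the worst case for the list scan.
setup = "data = list(range(10**6)); s = set(data)"
print(timeit.timeit("999_999 in data", setup=setup, number=100))  # O(n) per lookup
print(timeit.timeit("999_999 in s", setup=setup, number=100))     # O(1) per lookup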
Both dictionaries and sets use hashing, and they consume considerably more memory than is needed for the stored objects alone. According to A.M. Kuchling in Beautiful Code, the implementation tries to keep the hash table no more than 2/3 full, so you may waste a fair amount of memory.
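As a rough illustration, sys.getsizeof reports the size of the container itself (not of the stored objects); the exact figures vary across CPython versions, so treat this as a sketch:

import sys

n = 10**5
print(sys.getsizeof(list(range(n))))           # compact array of object pointers
print(sys.getsizeof(set(range(n))))            # sparse hash table
print(sys.getsizeof(dict.fromkeys(range(n))))  # sparse hash table plus value slots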
If you do not add new entries on the fly (which you do, based on your updated question), it might be worthwhile to sort the list and use binary search. This is O(log n); it is likely to be slower for strings, and impossible for objects that have no natural ordering.
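A minimal sketch of that approach, using the standard library's bisect module for the binary search:

import bisect

def contains(sorted_data, x):
    """O(log n) membership test on an already-sorted list."""
    i = bisect.bisect_left(sorted_data, x)
    return i < len(sorted_data) and sorted_data[i] == x

data = sorted([17, 3, 42, 8, 23])   # sort once up front: O(n log n)
print(contains(data, 42))           # True
print(contains(data, 5))            # False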
A dict is a hash table, so it is really fast to find the keys. So between dict and list, dict would be faster. But if you don't have a value to associate, it is even better to use a set. It is a hash table, without the "table" part.
EDIT: for your new question, YES, a set would be better. Just create two sets, one for sequences ending in 1 and another for sequences ending in 89. I have successfully solved this problem using sets.
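The updated question isn't quoted here, but the description matches Project Euler problem 92 (repeatedly replace a number by the sum of the squares of its digits; every chain eventually reaches 1 or 89). Assuming that is the problem, a sketch of the two-set idea:

# Assumes Project Euler #92: map n to the sum of the squares of its digits;
# every starting number eventually reaches 1 or 89.
ends_in_1, ends_in_89 = {1}, {89}   # cached chain endpoints

def chain_ends_in_89(n):
    seen = []
    while n not in ends_in_1 and n not in ends_in_89:
        seen.append(n)
        n = sum(int(d) ** 2 for d in str(n))
    target = ends_in_89 if n in ends_in_89 else ends_in_1
    target.update(seen)             # future lookups on these values are O(1)
    return target is ends_in_89

# Count chains ending in 89 (use range(1, 10**7) for the full problem; slow in pure Python).
print(sum(chain_ends_in_89(i) for i in range(1, 10_000)))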
set() is exactly what you want: O(1) lookups, and smaller than a dict.
I did some benchmarking and it turns out that dict is faster than both list and set for large data sets, running python 2.7.3 on an i7 CPU on linux:
python -mtimeit -s 'd=range(10**7)' '5*10**6 in d'
10 loops, best of 3: 64.2 msec per loop
python -mtimeit -s 'd=dict.fromkeys(range(10**7))' '5*10**6 in d'
10000000 loops, best of 3: 0.0759 usec per loop
python -mtimeit -s 'from sets import Set; d=Set(range(10**7))' '5*10**6 in d'
1000000 loops, best of 3: 0.262 usec per loop
As you can see, dict is considerably faster than list and about 3 times faster than set in this run. In some applications you might still want to choose set for the beauty of it, though. And if the data sets are really small (< 1000 elements), lists perform pretty well.
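Note that this benchmark uses the legacy sets.Set class, which was removed in Python 3; the built-in set type is generally on a par with dict for membership tests. A modern equivalent of the same commands (timings omitted, since they depend entirely on your machine):

python3 -m timeit -s 'd=list(range(10**7))' '5*10**6 in d'
python3 -m timeit -s 'd=dict.fromkeys(range(10**7))' '5*10**6 in d'
python3 -m timeit -s 'd=set(range(10**7))' '5*10**6 in d'

(In Python 3, list(range(...)) is needed because a bare range object has its own O(1) containment test.)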
You want a dict.
For (unsorted) lists in Python, the "in" operation requires O(n) time, which is not good when you have a large amount of data. A dict, on the other hand, is a hash table, so you can expect O(1) lookup time.
As others have noted, you might choose a set (essentially a dict that stores only keys) instead, if you only have keys rather than key/value pairs.
As a new set of tests to show @EriF89 is still right after all these years:
$ python -m timeit -s "l={k:k for k in xrange(5000)}" "[i for i in xrange(10000) if i in l]"
1000 loops, best of 3: 1.84 msec per loop
$ python -m timeit -s "l=[k for k in xrange(5000)]" "[i for i in xrange(10000) if i in l]"
10 loops, best of 3: 573 msec per loop
$ python -m timeit -s "l=tuple([k for k in xrange(5000)])" "[i for i in xrange(10000) if i in l]"
10 loops, best of 3: 587 msec per loop
$ python -m timeit -s "l=set([k for k in xrange(5000)])" "[i for i in xrange(10000) if i in l]"
1000 loops, best of 3: 1.88 msec per loop
Here we also compare a tuple, which is known to be faster than a list (and to use less memory) in some use cases. As a lookup table, however, the tuple fared no better.
Both the dict and set performed very well. This brings up an interesting point tying into @SilentGhost's answer about uniqueness: if the OP has 10M values in a data set, and it's unknown whether there are duplicates in them, then it would be worth keeping a set/dict of its elements in parallel with the actual data set, and testing for existence in that set/dict. It's possible the 10M data points have only 10 unique values, which is a much smaller space to search!
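A minimal sketch of that parallel-set idea (the data here is illustrative):

# Keep a set alongside the raw data purely for O(1) membership tests.
data = [7, 7, 7, 3, 7, 3, 7]   # possibly millions of points, few unique values
seen = set(data)               # parallel lookup structure; build once, O(n)

print(3 in seen)    # True, found in O(1)
print(42 in seen)   # False, also O(1)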
SilentGhost's mistake about dicts is actually illuminating because one could use a dict to correlate duplicated data (in values) into a nonduplicated set (keys), and thus keep one data object to hold all data, yet still be fast as a lookup table. For example, a dict key could be the value being looked up, and the value could be a list of indices in an imaginary list where that value occurred.
For example, if the source data list to be searched was l = [1, 2, 3, 1, 2, 1, 4], it could be optimized for both searching and memory by replacing it with this dict:
>>> from collections import defaultdict
>>> d = defaultdict(list)
>>> l = [1, 2, 3, 1, 2, 1, 4]
>>> for i, e in enumerate(l):
...     d[e].append(i)
...
>>> d
defaultdict(<class 'list'>, {1: [0, 3, 5], 2: [1, 4], 3: [2], 4: [6]})
With this dict, one can know:

1) if the datum was in the data set (i.e., 2 in d returns True)
2) where the datum occurred in the original data list (i.e., d[2] returns the list of indices where it was found: [1, 4])
If the data are unique, set() will be the most efficient; of the two in the question, dict is the one to use (though a dict also requires unique keys, oops :)