Lately I've been using some large dictionaries (large meaning ~thousands of elements) and so I have been employing this little guy:
try:
    dict[key]
    # do something
except KeyError:
    pass
In place of
if key in dict.keys():
    # do something
At what dictionary size does the former become more efficient than the latter? From my understanding, the second method is faster when keys() is small, but at what point does the first method start to win?
When you do
if key in dict.keys():
Python 2.x has to do O(N) comparisons to find whether key is in the dict, because dict.keys() returns a list of keys. (In Python 3.x this is fine: dict.keys() returns a set-like view object, so membership testing is a constant-time hash lookup.)
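For example, in Python 3 the view returned by dict.keys() supports constant-time membership tests and set operations (the names d and ks below are just illustrative):

```python
# Python 3: dict.keys() returns a set-like view, not a list
d = {i: i * i for i in range(1000)}
ks = d.keys()

# Membership is a hash lookup on the view -- O(1), same as "key in d"
print(500 in ks)   # True
print(-1 in ks)    # False

# The view also supports set operations
print(ks & {1, 2, 5000})  # {1, 2}
```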
Instead, make use of the constant time lookup like this
if key in dict:
This computes the hash of the key and looks it up directly in the dict's hash table, which is O(1) on average regardless of the dict's size.
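If you want a rough way to measure the crossover yourself, you can time both styles with timeit. The dict size, function names, and test keys below are arbitrary choices of mine, just a sketch:

```python
import timeit

d = {i: str(i) for i in range(100_000)}

def lookup_membership(key):
    # LBYL: constant-time hash lookup on the dict itself
    if key in d:
        return d[key]
    return None

def lookup_try(key):
    # EAFP: cheap when the key exists, pays for the exception on a miss
    try:
        return d[key]
    except KeyError:
        return None

# Time a key that exists and one that doesn't
for label, key in [("hit", 50_000), ("miss", -1)]:
    t_in = timeit.timeit(lambda: lookup_membership(key), number=100_000)
    t_try = timeit.timeit(lambda: lookup_try(key), number=100_000)
    print(f"{label}: 'in' {t_in:.3f}s, try/except {t_try:.3f}s")
```

On typical CPython builds, try/except tends to win slightly when lookups almost always hit, while `in` wins when misses are common, because raising and catching KeyError is relatively expensive. Either way, both are O(1) per lookup; it's `key in dict.keys()` under Python 2 that degrades with dict size.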