As the title states, how expensive are Python dictionaries to handle? Creation, insertion, updating, deletion, all of it.
Asymptotic time complexities are interesting themselves, but also how they compare to e.g. tuples or normal lists.
The fastest way to repeatedly look up data among millions of entries in Python is to use a dictionary. Dictionaries are Python's built-in mapping type, so they are heavily optimized.
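As a minimal sketch of why that is (the variable names here are invented for illustration): fetching a value from a dict is a single hashed probe, while finding the same pair in a list of tuples requires scanning element by element.

```python
# Illustrative comparison (names invented for this sketch): a dict lookup
# hashes the key once, while searching a list of (key, value) pairs
# scans element by element.
n = 1_000_000
as_list = [(i, i * 2) for i in range(n)]
as_dict = dict(as_list)

key = n - 1                                          # worst case for the list scan
from_dict = as_dict[key]                             # hashed lookup: O(1) on average
from_list = next(v for k, v in as_list if k == key)  # linear scan: O(n)

assert from_dict == from_list == (n - 1) * 2
```

On a typical machine the dict lookup is effectively instantaneous, while the list scan has to walk almost a million tuples.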
Python dictionaries are fast, but their memory consumption can be correspondingly high.
Experiments that keep doubling a dictionary's size typically fail with a MemoryError (here, before reaching 2^27 entries) because the machine runs out of memory first. In other words, a dictionary has no built-in size limit, only the limit of available memory.
A dictionary occupies considerably more space than a list of tuples holding the same pairs; even an empty dict is larger than an empty list.
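You can get a rough feel for this with `sys.getsizeof` (a sketch; exact byte counts vary across Python versions and platforms):

```python
import sys

# Shallow memory footprint of a dict vs. a list of tuples holding the
# same 1,000 key/value pairs.  Note that sys.getsizeof is shallow: it
# counts the container itself, not the keys, values, or tuples inside.
pairs = [(i, i * 2) for i in range(1000)]
as_dict = dict(pairs)

print(f"list of tuples: {sys.getsizeof(pairs)} bytes")
print(f"dict:           {sys.getsizeof(as_dict)} bytes")
```

The dict's hash table keeps spare slots to make collisions rare, which is part of what buys its O(1) lookups at the cost of extra memory.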
`dict`s (just like `set`s, when you don't need to associate a value with each key but simply want to record whether a key is present or absent) are pretty heavily optimized. Creating a `dict` from N keys or key/value pairs is O(N), fetching is O(1), putting is amortized O(1), and so forth. You can't really do anything substantially better for any non-tiny container!
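One way to see the O(1) fetch claim in practice is to time lookups against dicts of very different sizes; the per-lookup cost should stay roughly flat (the sizes below are arbitrary, and absolute timings are machine-dependent):

```python
from timeit import timeit

# Sketch: per-lookup cost should stay roughly flat as the dict grows,
# since dict fetches are O(1) on average regardless of size.
times = {}
for n in (1_000, 100_000, 1_000_000):
    d = dict.fromkeys(range(n))
    # Time 100,000 lookups of a key that is present.
    times[n] = timeit("target in d",
                      globals={"d": d, "target": n - 1},
                      number=100_000)

for n, t in times.items():
    print(f"{n:>9} keys: {t:.4f} s per 100,000 lookups")
```

Contrast this with a list, where the same membership test would slow down linearly with the container's length.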
For tiny containers, you can easily check the boundaries with `timeit`-based benchmarks. For example:
```
$ python -mtimeit -s'empty=()' '23 in empty'
10000000 loops, best of 3: 0.0709 usec per loop
$ python -mtimeit -s'empty=set()' '23 in empty'
10000000 loops, best of 3: 0.101 usec per loop
$ python -mtimeit -s'empty=[]' '23 in empty'
10000000 loops, best of 3: 0.0716 usec per loop
$ python -mtimeit -s'empty=dict()' '23 in empty'
10000000 loops, best of 3: 0.0926 usec per loop
```
This shows that checking membership in empty lists or tuples is faster, by a whopping 20-30 nanoseconds, than checking membership in empty sets or dicts; when every nanosecond matters, this info might be relevant to you. Moving up a bit:
```
$ python -mtimeit -s'empty=range(7)' '23 in empty'
1000000 loops, best of 3: 0.318 usec per loop
$ python -mtimeit -s'empty=tuple(range(7))' '23 in empty'
1000000 loops, best of 3: 0.311 usec per loop
$ python -mtimeit -s'empty=set(range(7))' '23 in empty'
10000000 loops, best of 3: 0.109 usec per loop
$ python -mtimeit -s'empty=dict.fromkeys(range(7))' '23 in empty'
10000000 loops, best of 3: 0.0933 usec per loop
```
You see that for 7-item containers (not including the item being looked up, which is absent here) the balance of performance has shifted, and now dicts and sets have the advantage by hundreds of nanoseconds. When the item of interest IS present:
```
$ python -mtimeit -s'empty=range(7)' '5 in empty'
1000000 loops, best of 3: 0.246 usec per loop
$ python -mtimeit -s'empty=tuple(range(7))' '5 in empty'
1000000 loops, best of 3: 0.25 usec per loop
$ python -mtimeit -s'empty=dict.fromkeys(range(7))' '5 in empty'
10000000 loops, best of 3: 0.0921 usec per loop
$ python -mtimeit -s'empty=set(range(7))' '5 in empty'
10000000 loops, best of 3: 0.112 usec per loop
```
Dicts and sets don't gain much, but tuples and lists do, even though dicts and sets remain vastly faster.
And so on, and so forth -- `timeit` makes it trivially easy to run micro-benchmarks (strictly speaking, warranted only for those exceedingly rare situations where nanoseconds DO matter, but easy enough to do that it's no big hardship to check for OTHER cases;-).
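The same micro-benchmarks can also be run from Python code rather than the shell, via the `timeit` module; a sketch mirroring the empty-container CLI invocations above:

```python
from timeit import timeit

# Reproduce the shell one-liners above in code: each setup/statement
# pair mirrors one CLI invocation of python -mtimeit.
for setup in ("empty = ()", "empty = set()", "empty = []", "empty = dict()"):
    secs = timeit("23 in empty", setup=setup, number=1_000_000)
    print(f"{setup:<16} {secs:.4f} s per 1,000,000 membership checks")
```

This is handy when you want to benchmark inside a script or notebook instead of invoking the interpreter repeatedly from a shell.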