I need to create a lookup function where an (X, Y) pair corresponds to a specific Z value. One major requirement is that it be as close to O(1) complexity as I can get. My plan is to use an unordered_map.
I generally do not use a hash table for lookup, as the lookup time has never been important to me. Am I correct in thinking that as long as I build the unordered_map with no collisions, my lookup time will be O(1)?
My concern then is what the complexity becomes if the key is not present in the unordered_map. If I use unordered_map::find(), for example, to determine whether a key is present in my hash table, how will it go about giving me an answer? Does it actually iterate over all the keys?
I greatly appreciate the help.
The time complexity of map operations is O(log n) while for unordered_map, it is O(1) on average.
As for insertion performance: in typical benchmarks, unordered_map is substantially faster than the map implementation, even for small numbers of elements.
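That claim is easy to check yourself. Here is a minimal timing sketch; the element count, the int key type, and the reserve call are my own choices, and sequential integer keys flatter both containers, so treat the numbers as illustrative only:

```cpp
#include <chrono>
#include <cstdio>
#include <map>
#include <unordered_map>

int main() {
    constexpr int N = 1'000'000;
    std::map<int, int> m;
    std::unordered_map<int, int> um;
    um.reserve(N);  // pre-size so insertion never triggers a rehash

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i) m[i] = i;
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i) um[i] = i;
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("map:           %.1f ms\n", ms(t1 - t0).count());
    std::printf("unordered_map: %.1f ms\n", ms(t2 - t1).count());
}
```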
std::unordered_map is supposedly slow because the standard imposes fairly stringent iterator and reference stability requirements, which force a node-based implementation. In my experience, unless you need to wring every last bit of performance out of your code, it's not a huge issue; it's generally faster than most casual hash-table implementations.
But if we consider the worst case (i.e. when the hash function is bad), the time complexity of every unordered_map operation is O(n), while the worst-case complexity of every map operation is O(log n). Hence, in the worst case, a map is faster than an unordered_map.
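To make that worst case concrete, here is a sketch with a deliberately terrible hash (ConstantHash is my own illustration, obviously not something you'd ship): every key lands in the same bucket, so each lookup degenerates into a linear scan of all n elements.

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>

// Pathological hash: every key hashes to 0, defeating the table.
struct ConstantHash {
    std::size_t operator()(const std::string&) const { return 0; }
};

int main() {
    std::unordered_map<std::string, int, ConstantHash> degenerate;
    for (int i = 0; i < 1000; ++i)
        degenerate["key" + std::to_string(i)] = i;

    // All 1000 elements share one bucket, so find() must walk a
    // 1000-element chain: the O(n) worst case described above.
    std::size_t b = degenerate.bucket("key0");
    std::printf("bucket %zu holds %zu elements\n",
                b, degenerate.bucket_size(b));
}
```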
The standard more or less requires using buckets for collision resolution, which means that the actual lookup time will probably be linear with respect to the number of elements in the bucket, regardless of whether the element is present or not. It would be possible to make it O(log n), but it's not usually done, because the number of elements in a bucket should be small if the hash table is being used correctly.
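In other words, unordered_map::find() hashes the key, scans only the one bucket that hash selects, and returns end() on a miss; it never iterates over all the keys. A failed lookup costs the same as a successful one. The standard bucket interface lets you observe this directly:

```cpp
#include <cstdio>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> table{{1, 10}, {2, 20}, {3, 30}};

    auto hit  = table.find(2);   // present: O(1) average
    auto miss = table.find(99);  // absent: same cost, just an empty scan

    if (hit != table.end())  std::printf("found 2 -> %d\n", hit->second);
    if (miss == table.end()) std::printf("99 not in table\n");

    // The per-bucket cost is visible through the bucket interface:
    std::size_t b = table.bucket(2);
    std::printf("key 2 is in bucket %zu of %zu; that bucket holds %zu elements\n",
                b, table.bucket_count(), table.bucket_size(b));
}
```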
To ensure that the number of elements in a bucket is small, you must ensure that the hashing function is effective. What effective means depends on the types and values being hashed. (The MS implementation uses FNV, which is one of the best generic hashes around, but if you have special knowledge of the actual data you'll be seeing, you might be able to do better.)
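To make this concrete for the (X, Y) pair in the question: the standard library provides no hash for std::pair, so you must supply one. The combiner below is the common boost-style hash_combine pattern, a reasonable default rather than the only choice; whether it's effective depends on the distribution of your actual X and Y values.

```cpp
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <utility>

// Boost-style combiner: mixes the two component hashes so that
// (a, b) and (b, a) don't collide systematically.
struct PairHash {
    std::size_t operator()(const std::pair<int, int>& p) const {
        std::size_t h = std::hash<int>{}(p.first);
        h ^= std::hash<int>{}(p.second) + 0x9e3779b9 + (h << 6) + (h >> 2);
        return h;
    }
};

using XYTable = std::unordered_map<std::pair<int, int>, double, PairHash>;

int main() {
    XYTable z;
    z[{3, 4}] = 5.0;                      // (X, Y) -> Z
    return z.count({3, 4}) == 1 ? 0 : 1;  // lookup by pair key
}
```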
Another thing which can help reduce the number of elements per bucket is to force more buckets or use a smaller load factor. For the first, you can pass the minimum initial number of buckets as an argument to the constructor. If you know the total number of elements that will be in the map, you can control the load factor this way. You can also force a minimum number of buckets once the table has been filled, by calling rehash. Otherwise, there is a function std::unordered_map<>::max_load_factor which you can use. It is not guaranteed to do anything, but in any reasonable implementation, it will. Note that if you use it on an already filled unordered_map, you'll probably have to call unordered_map<>::rehash afterwards.
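Putting those knobs together, a sketch (the bucket count and load factor here are arbitrary placeholders, not recommendations):

```cpp
#include <unordered_map>

int main() {
    // Pass a minimum bucket count to the constructor if the total
    // element count is known up front, so filling never rehashes.
    std::unordered_map<int, int> table(1 << 20);

    // Lower the target load factor for shorter buckets. The standard
    // doesn't require this to have an effect, but real implementations
    // honor it.
    table.max_load_factor(0.5f);

    // On an already-filled table, force the rehash explicitly.
    // rehash(0) means "at least enough buckets for the current size
    // and max_load_factor".
    table.rehash(0);
}
```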
(There are several things I don't understand about the standard unordered_map: why the load factor is a float, instead of double; why it's not required to have an effect; and why it doesn't automatically call rehash for you.)