From the Javadoc, I know that:
Iteration over collection views requires time proportional to the "capacity" of the HashMap instance (the number of buckets) plus its size (the number of key-value mappings). Thus, it's very important not to set the initial capacity too high (or the load factor too low) if iteration performance is important.
Does it mean the time complexity for iteration over HashMap is O(n²)? This question may sound silly but I'm actually a bit confused.
HashMap has O(1) average-case complexity for insertion and lookup. It allows one null key and any number of null values, and it does not maintain any order of its entries.
HashMap's best and average case for search, insert, and delete is O(1); the worst case is O(n). The hashCode() of a key is used to distribute entries systematically across the buckets, so that searching can be done faster.
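For illustration, here is a minimal sketch of those properties (the class and variable names are just examples):

import java.util.HashMap;
import java.util.Map;

public class HashMapBasics {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put(null, 1);   // one null key is allowed
        map.put("a", null); // null values are allowed
        map.put("b", null);
        map.put("c", 3);

        // Iteration order is unspecified: it follows bucket order,
        // not insertion order, and can change after rehashing.
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}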
Iteration over a HashMap is O(n + m), where n is the number of entries and m is the capacity (the number of buckets). In the worst case a single bucket holds the entire linked list of entries, which can happen when the keys' hashCode() implementation is flawed; lookup then degrades from O(1) to O(n).
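As a hedged sketch of that worst case, consider a key type whose hashCode() always returns the same value, so every entry lands in the same bucket (BadHashKey is a hypothetical name):

import java.util.HashMap;
import java.util.Map;

public class BadHashKey {
    private final String name;

    BadHashKey(String name) { this.name = name; }

    @Override
    public boolean equals(Object o) {
        return o instanceof BadHashKey && ((BadHashKey) o).name.equals(name);
    }

    // Flawed hashCode: every key hashes to the same bucket, so all
    // entries share one bin and lookup degrades from O(1) toward O(n).
    @Override
    public int hashCode() { return 42; }

    public static void main(String[] args) {
        Map<BadHashKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(new BadHashKey("key" + i), i);
        }
        // This get() must scan the single overloaded bin.
        System.out.println(map.get(new BadHashKey("key9999")));
    }
}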
No, this does not mean that the iteration complexity is O(n²).
When the capacity c is set automatically, it grows as O(n) with the number of items, not as O(n²). According to the source code, the target capacity is computed as follows:
int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1);
where loadFactor is a float value set to 0.75f by default.
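Assuming that formula, a map can be pre-sized so that adding a known number of entries never triggers a rehash (expectedEntries is an illustrative name):

import java.util.HashMap;
import java.util.Map;

public class PreSizedMap {
    public static void main(String[] args) {
        int expectedEntries = 10_000;
        float loadFactor = 0.75f; // the default

        // Mirrors the formula above: enough buckets that the map
        // stays below the load-factor threshold while filling up.
        int targetCapacity = (int) (expectedEntries / loadFactor + 1);

        Map<String, Integer> map = new HashMap<>(targetCapacity, loadFactor);
        for (int i = 0; i < expectedEntries; i++) {
            map.put("key" + i, i);
        }
        System.out.println("size = " + map.size());
    }
}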
The paragraph that you quote becomes relevant only when you set the capacity manually. It is telling you that the performance is O(c), not O(n), where c is the capacity that you set or the capacity computed automatically.
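One rough way to observe that O(c) cost is to time iteration over two maps with the same size but very different capacities (timings are JVM-dependent, so treat this as a sketch, not a benchmark):

import java.util.HashMap;
import java.util.Map;

public class IterationCost {
    public static void main(String[] args) {
        Map<Integer, Integer> small = new HashMap<>();        // default capacity (16)
        Map<Integer, Integer> huge = new HashMap<>(1 << 22);  // ~4 million buckets

        for (int i = 0; i < 100; i++) {  // same 100 entries in both
            small.put(i, i);
            huge.put(i, i);
        }

        // The iterator walks the whole bucket array, so the huge map
        // is measurably slower to traverse despite holding equal data.
        System.out.println("small: " + timeIteration(small) + " ns");
        System.out.println("huge:  " + timeIteration(huge) + " ns");
    }

    static long timeIteration(Map<Integer, Integer> map) {
        long start = System.nanoTime();
        long sum = 0;
        for (Map.Entry<Integer, Integer> e : map.entrySet()) {
            sum += e.getValue();
        }
        if (sum < 0) System.out.println(sum); // keep the loop from being optimized away
        return System.nanoTime() - start;
    }
}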
No, it means that the complexity of iteration over a HashMap is O(n + s), where n is the number of buckets (the capacity) and s is the size (the number of mappings). The iterator must walk linearly over all n buckets and visit all s entries.