From the JavaDoc of HashMap:
As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put).
If we use a higher value, why would it increase the lookup cost?
A hash table's load factor is defined as n/s: the ratio of the number of stored entries n to the size s of the table's array of buckets.
A hash table performs well as long as the number of collisions stays low. At a high load factor, the same number of entries is packed into fewer buckets, which increases the probability of collisions, and colliding entries must be searched linearly within a bucket, so get and put become slower.
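As a rough sketch of this effect (using a toy hash function and a hypothetical key set, not HashMap's real internals), we can count how many keys land in an already-occupied bucket when the same 12 keys are spread over 4 buckets (load factor 3.0) versus 16 buckets (load factor 0.75):

```java
public class CollisionSketch {
    // Counts how many of n keys land in an already-occupied bucket
    // when spread over s buckets via index = hash % s.
    static int collisions(int n, int s) {
        boolean[] occupied = new boolean[s];
        int count = 0;
        for (int key = 0; key < n; key++) {
            int bucket = (key * 31 + 7) % s; // toy hash, for illustration only
            if (bucket < 0) bucket += s;
            if (occupied[bucket]) count++;
            else occupied[bucket] = true;
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 12;
        // Same 12 keys: 4 buckets (load factor 3.0) vs 16 buckets (0.75)
        System.out.println("4 buckets:  " + collisions(n, 4));  // many collisions
        System.out.println("16 buckets: " + collisions(n, 16)); // few or none
    }
}
```

With only 4 buckets, the pigeonhole principle guarantees at least 8 of the 12 keys collide; with 16 buckets most keys get a bucket to themselves, which is why lookups stay close to O(1).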
Here we should first understand what capacity and load factor mean:
capacity: the number of buckets in the hash table at any given point in time.
load factor: a measure of how full the hash table is allowed to get before its capacity is automatically increased.
So the higher the load factor, the more occupied the hash table can get before its capacity is increased.
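Both values can be set via HashMap's two-argument constructor, HashMap(int initialCapacity, float loadFactor). A small sketch: with 16 buckets and a load factor of 0.75, the resize threshold is 16 * 0.75 = 12 entries, so the bucket array doubles once a 13th entry is added (the count of 13 below is illustrative; the resize itself is internal and not observable through the public API):

```java
import java.util.HashMap;
import java.util.Map;

public class LoadFactorDemo {
    public static void main(String[] args) {
        // 16 buckets, load factor 0.75 -> resize threshold = 16 * 0.75 = 12.
        Map<String, Integer> map = new HashMap<>(16, 0.75f);
        for (int i = 0; i < 13; i++) {
            map.put("key" + i, i); // the 13th put pushes size past the threshold
        }
        System.out.println(map.size()); // 13 entries; table has resized internally
    }
}
```

A higher load factor (say 2.0f) would delay that resize, saving space but letting buckets fill up and lookups slow down, which is exactly the tradeoff the JavaDoc describes.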