Does anyone know the answer to this question?
Hash tables seem to be O(1) because they have a small constant factor, and the n in their O(log n) term must grow enormously before the log term matters; for many practical applications the cost is effectively independent of the number of items you are actually storing (even four billion entries give a log2(n) of only 32).
Hash tables are often used to implement associative arrays, sets and caches. Like arrays, hash tables provide constant-time O(1) lookup on average, regardless of the number of items in the table.
When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.
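To make that rule concrete, here is a small self-contained Java sketch of the threshold arithmetic; the 0.75 load factor and initial capacity of 16 are java.util.HashMap's documented defaults, but the loop itself is just an illustrative model, not the real implementation:

    // Demonstrates the rehash threshold described above: once the number of
    // entries exceeds loadFactor * capacity, the bucket count roughly doubles.
    public class RehashDemo {
        public static void main(String[] args) {
            float loadFactor = 0.75f; // java.util.HashMap's documented default
            int capacity = 16;        // HashMap's default initial bucket count
            for (int entries = 1; entries <= 100; entries++) {
                if (entries > loadFactor * capacity) {
                    capacity *= 2; // rebuild with roughly twice as many buckets
                    System.out.printf("entry %d triggers a rehash -> %d buckets%n",
                            entries, capacity);
                }
            }
        }
    }

Run it and you'll see the doubling fire at entries 13, 25, 49, and 97: each rehash pushes the next one roughly twice as far away.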
Is there a theoretical limit to the number of key entries that can be stored in a HashMap, or does it purely depend on the available heap memory? Looking at the documentation of that class, I would say that the theoretical limit is Integer.MAX_VALUE (2^31 - 1 = 2,147,483,647) elements.
Yes. To search a hash map with 100 million items added to it, you do this:
1) Calculate the hash of the object you're looking for.
2) Find that bucket.
3) Search through that bucket for the item.
(1) is independent of the size of the hash map or number of items in it.
(2) is O(1), assuming a standard hash map implemented as an array of linked lists.
(3) takes an amount of time related to the number of items in the bucket, which should be approximately (number of items added to the hash map) / (number of buckets). This part will start at O(1), but will very slowly increase as the number of items begins to greatly exceed the number of buckets.
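To make steps 1) through 3) concrete, here is a minimal array-of-linked-lists map; the class and all of its names are illustrative assumptions for this answer, not java.util.HashMap's actual internals:

    import java.util.LinkedList;

    // Minimal array-of-linked-lists map, just enough to show the three
    // lookup steps. An illustrative sketch, not java.util.HashMap itself.
    class BucketMap<K, V> {
        static class Entry<K, V> {
            final K key;
            V value;
            Entry(K key, V value) { this.key = key; this.value = value; }
        }

        private final LinkedList<Entry<K, V>>[] buckets;

        @SuppressWarnings("unchecked")
        BucketMap(int bucketCount) {
            buckets = new LinkedList[bucketCount];
            for (int i = 0; i < bucketCount; i++) {
                buckets[i] = new LinkedList<>();
            }
        }

        void put(K key, V value) {
            for (Entry<K, V> e : bucketFor(key)) {
                if (e.key.equals(key)) { e.value = value; return; } // overwrite
            }
            bucketFor(key).add(new Entry<>(key, value));
        }

        V get(K key) {
            // Step 1: hash the key -- cost independent of the map's size.
            // Step 2: index into the bucket array -- O(1).
            int index = Math.floorMod(key.hashCode(), buckets.length);
            // Step 3: scan the (ideally short) chain in that one bucket.
            for (Entry<K, V> e : buckets[index]) {
                if (e.key.equals(key)) return e.value;
            }
            return null;
        }

        private LinkedList<Entry<K, V>> bucketFor(K key) {
            return buckets[Math.floorMod(key.hashCode(), buckets.length)];
        }
    }

Only the chain scan in get() grows with the data, and it grows as (items / buckets), which is exactly the ratio described in 3).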
For almost any purpose, hash maps can be considered O(1) for both insertion and retrieval, even with very large data sets, as long as you start with a sufficiently large number of buckets.
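In Java you can supply that bucket count up front via HashMap's initial-capacity constructor, so the table is never rehashed while being filled; the sizing arithmetic below is a common rule of thumb, not an official recipe:

    import java.util.HashMap;
    import java.util.Map;

    public class PresizeDemo {
        public static void main(String[] args) {
            // Request enough capacity that ~100 million entries never cross
            // the default 0.75 load-factor threshold, so no rehash happens
            // while the map is being loaded. (Actually filling it would need
            // several GB of heap.)
            int expected = 100_000_000;
            Map<Long, String> big = new HashMap<>((int) (expected / 0.75f) + 1);
            System.out.println("map pre-sized for " + expected + " entries");
        }
    }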