Can hash tables really be O(1)?

You have two variables here, m and n, where m is the length of the input and n is the number of items in the hash.

The O(1) lookup performance claim makes at least two assumptions:

  • Your objects can be equality compared in O(1) time.
  • There will be few hash collisions.

If your objects are variable size and an equality check requires looking at all bits, then performance will become O(m). The hash function, however, does not have to be O(m); it can be O(1). Unlike a cryptographic hash, a hash function for use in a dictionary does not have to look at every bit of the input in order to calculate the hash. Implementations are free to look at only a fixed number of bits.
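
For illustration, here is a minimal Python sketch of that idea; the function name, the 8-byte cap and the multiplier are arbitrary choices for the example, not any particular library's implementation:

# Minimal sketch (illustrative, not any library's actual scheme) of a
# dictionary-style hash that examines only a bounded prefix of the key,
# so computing it costs O(1) regardless of how long the key is.

def bounded_hash(key: bytes, num_buckets: int, max_bytes: int = 8) -> int:
    h = len(key)                      # mix in the length so b"a" and b"aa" differ
    for b in key[:max_bytes]:         # look at only a fixed number of bytes
        h = (h * 31 + b) & 0xFFFFFFFF
    return h % num_buckets

print(bounded_hash(b"short key", 64))
print(bounded_hash(b"x" * 1_000_000, 64))  # same hashing cost despite the huge key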

For sufficiently many items, the number of items will become greater than the number of possible hashes, and then you will get collisions causing the performance to rise above O(1) - for example, O(n) for a simple linked list traversal (or O(n*m) if both assumptions are false).

In practice, though, the O(1) claim, while technically false, is approximately true for many real-world situations, and in particular those situations where the above assumptions hold.


You have to calculate the hash, so the order is O(n) for the size of the data being looked up. The lookup might be O(1) after you do O(n) work, but that still comes out to O(n) in my eyes.

What? Hashing a single element takes constant time. Why would it be anything else? If you're inserting n elements, then yes, you have to compute n hashes, and that takes linear time... To look an element up, you compute a single hash of the key you're looking for, then find the appropriate bucket with that. You don't re-compute the hashes of everything that's already in the hash table.

And unless you have a perfect hash or a large hash table, there are probably several items per bucket, so it devolves into a small linear search at some point anyway.

Not necessarily. The buckets don't necessarily have to be lists or arrays, they can be any container type, such as a balanced BST. That means O(log n) worst case. But this is why it's important to choose a good hashing function to avoid putting too many elements into one bucket. As KennyTM pointed out, on average, you will still get O(1) time, even if occasionally you have to dig through a bucket.
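
To illustrate the idea of a non-list bucket, here is a rough Python sketch where each bucket is a list kept sorted by key and probed with bisect, so a lookup inside a crowded bucket needs only O(log b) comparisons rather than a linear scan (a real balanced BST would also bound insertion cost; all names here are made up and keys are assumed comparable):

# Minimal, illustrative sketch of a non-list bucket: each bucket is kept
# sorted by key and binary-searched, bounding the comparisons per lookup.
import bisect

class SortedBucketTable:
    def __init__(self, num_buckets: int = 16):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        i = bisect.bisect_left(bucket, (key,))        # binary search by key
        if i < len(bucket) and bucket[i][0] == key:
            bucket[i] = (key, value)                  # overwrite existing key
        else:
            bucket.insert(i, (key, value))

    def get(self, key, default=None):
        bucket = self._bucket(key)
        i = bisect.bisect_left(bucket, (key,))
        if i < len(bucket) and bucket[i][0] == key:
            return bucket[i][1]
        return default

t = SortedBucketTable(num_buckets=4)    # tiny table to force crowded buckets
for word in ["apple", "banana", "cherry", "date", "elderberry"]:
    t.put(word, len(word))
print(t.get("cherry"))                  # -> 6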

The trade off of hash tables is of course the space complexity. You're trading space for time, which seems to be the usual case in computing science.


You mention using strings as keys in one of your other comments. You're concerned about the amount of time it takes to compute the hash of a string, because it consists of several chars? As someone else already pointed out, you don't necessarily need to look at all the chars to compute the hash, although it might produce a better hash if you did. In that case, if there are on average m chars in your key and you used all of them to compute your hash, then I suppose you're right that lookups would take O(m). If m >> n then you might have a problem. You'd probably be better off with a BST in that case. Or choose a cheaper hashing function.


The hash is fixed size - looking up the appropriate hash bucket is a fixed cost operation. This means that it is O(1).

Calculating the hash does not have to be a particularly expensive operation - we're not talking cryptographic hash functions here. But that's by the by. The hash function calculation itself does not depend on the number n of elements; while it might depend on the size of the data in an element, this is not what n refers to. So the calculation of the hash does not depend on n and is also O(1).


Hashing is O(1) only if there is only a constant number of keys in the table and some other assumptions are made. But in such cases it has an advantage.

If your key has an n-bit representation, your hash function can use 1, 2, ... n of these bits. Think about a hash function that uses 1 bit. Evaluation is O(1) for sure, but you are only partitioning the key space into 2, so you are mapping as many as 2^(n-1) keys into the same bin. Using BST search, this takes up to n-1 steps to locate a particular key if the table is nearly full.

You can extend this to see that if your hash function uses K bits, your bin size is 2^(n-K).

So a K-bit hash function ==> no more than 2^K effective bins ==> up to 2^(n-K) n-bit keys per bin ==> (n-K) steps (BST) to resolve collisions. Actually most hash functions are much less "effective" and need/use more than K bits to produce 2^K bins, so even this is optimistic.
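
For concreteness (illustrative numbers only): with 64-bit keys (n = 64) and a hash that effectively uses K = 16 bits, you get at most 2^16 ≈ 65,000 bins and up to 2^48 keys that map to the same bin, so resolving a full bin with a balanced BST could take on the order of 48 steps.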

You can view it this way -- you will need ~n steps to be able to uniquely distinguish a pair of keys of n bits in the worst case. There is really no way to get around this information theory limit, hash table or not.

However, this is NOT how/when you use hash table!

The complexity analysis assumes that for n-bit keys, you could have O(2^n) keys in the table (e.g. 1/4 of all possible keys). But most, if not all, of the time we use a hash table, we only have a constant number of the n-bit keys in the table. If you only want a constant number of keys in the table, say C is your maximum number, then you could form a hash table of O(C) bins, which guarantees expected constant collisions (with a good hash function), and a hash function using ~log C of the n bits in the key. Then every query is O(log C) = O(1). This is how people claim "hash table access is O(1)".
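
For illustration (numbers made up): if you will never store more than C = 2^20 (about a million) keys, a table with about 2^20 bins and a hash that effectively uses about 20 bits of each key gives roughly log C = 20 steps of hashing work per query plus an expected constant amount of collision handling; the next paragraph qualifies how the O(n) cost of even reading the key gets billed.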

There are a couple of catches here. First, saying you don't need all the bits may only be a billing trick. You cannot really pass the key value to the hash function, because that would mean moving n bits in memory, which is O(n). So you need to do e.g. a reference passing. But you still need to store the key somewhere, which was already an O(n) operation; you just don't bill it to the hashing; your overall computation task cannot avoid this. Second, you do the hashing, find the bin, and find more than one key; your cost depends on your resolution method: if you do comparison-based resolution (BST or list), you will have an O(n) operation (recall the key is n-bit); if you do a second hash, well, you have the same issue if the second hash has a collision. So O(1) is not 100% guaranteed unless you have no collisions (you can improve the chance by having a table with more bins than keys, but still).

Consider the alternative, e.g. a BST, in this case. There are C keys, so a balanced BST will be O(log C) in depth, so a search takes O(log C) steps. However, the comparison in this case would be an O(n) operation... so it appears hashing is a better choice in this case.
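
For concreteness (illustrative numbers, continuing the C = 2^20 case): a balanced BST needs about log C = 20 comparisons per search, and each comparison of n-bit keys can cost O(n), i.e. roughly 20·n bit operations per lookup, whereas the hashing approach reads the key once (O(n)) and then does an expected constant amount of bucket work.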


TL;DR: Hash tables guarantee O(1) expected worst case time if you pick your hash function uniformly at random from a universal family of hash functions. Expected worst case is not the same as average case.

Disclaimer: I don't formally prove hash tables are O(1); for that, have a look at this video from Coursera [1]. I also don't discuss the amortized aspects of hash tables. That is orthogonal to the discussion about hashing and collisions.

I see a surprising amount of confusion around this topic in other answers and comments, and will try to rectify some of it in this long answer.

Reasoning about worst case

There are different types of worst case analysis. The analysis that most answers have made here so far is not worst case, but rather average case [2]. Average case analysis tends to be more practical. Maybe your algorithm has one bad worst case input, but actually works well for all other possible inputs. The bottom line is that your runtime depends on the dataset you're running on.

Consider the following pseudocode of the get method of a hash table. Here I'm assuming we handle collision by chaining, so each entry of the table is a linked list of (key,value) pairs. We also assume the number of buckets m is fixed but is O(n), where n is the number of elements in the input.

function get(a: Table with m buckets, k: Key being looked up)
  bucket <- compute hash(k) modulo m
  for each (key,value) in a[bucket]
    return value if k == key
  return not_found

As other answers have pointed out, this runs in average O(1) and worst case O(n). We can make a little sketch of a proof by challenge here. The challenge goes as follows:

(1) You give your hash table algorithm to an adversary.

(2) The adversary can study it and prepare as long as he wants.

(3) Finally the adversary gives you an input of size n for you to insert in your table.

The question is: how fast is your hash table on the adversary input?

From step (1) the adversary knows your hash function; during step (2) the adversary can craft a list of n elements with the same hash modulo m, e.g. by randomly computing the hash of a bunch of elements and keeping the ones that collide; and then in (3) they can give you that list. But lo and behold, since all n elements hash to the same bucket, your algorithm will take O(n) time to traverse the linked list in that bucket. No matter how many times we retry the challenge, the adversary always wins, and that's how bad your algorithm is: worst case O(n).
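
As an illustration of this challenge, here is a small self-contained Python sketch (the class name and the identity hash are invented for the demo; this is not any real library's table): with a fixed, publicly known hash, the adversary simply inserts multiples of m, every key lands in the same bucket, and a lookup walks all n entries.

# Chained table with a fixed, publicly known hash: hash(k) = k, bucket = k mod m.
class ChainedTable:
    def __init__(self, num_buckets: int):
        self.buckets = [[] for _ in range(num_buckets)]

    def put(self, key: int, value):
        self.buckets[key % len(self.buckets)].append((key, value))

    def get(self, key: int):
        comparisons = 0
        for k, v in self.buckets[key % len(self.buckets)]:
            comparisons += 1
            if k == key:
                return v, comparisons
        return None, comparisons

m, n = 1024, 10_000
table = ChainedTable(m)
for i in range(n):                        # adversarial input: multiples of m
    table.put(i * m, i)
value, comparisons = table.get((n - 1) * m)
print(value, comparisons)                 # 9999 10000: the lookup walked the whole bucket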

How come hashing is O(1)?

What threw us off in the previous challenge was that the adversary knew our hash function very well, and could use that knowledge to craft the worst possible input. What if instead of always using one fixed hash function, we actually had a set of hash functions, H, that the algorithm can randomly choose from at runtime? In case you're curious, H is called a universal family of hash functions [3]. Alright, let's try adding some randomness to this.

First suppose our hash table also includes a seed r, and r is assigned to a random number at construction time. We assign it once and then it's fixed for that hash table instance. Now let's revisit our pseudocode.

function get(a: Table with m buckets and seed r, k: Key being looked up)
  rHash <- H[r]
  bucket <- compute rHash(k) modulo m
  for each (key,value) in a[bucket]
    return value if k == key
  return not_found

If we try the challenge one more time: from step (1) the adversary can know all the hash functions we have in H, but now the specific hash function we use depends on r. The value of r is private to our structure; the adversary cannot inspect it at runtime, nor predict it ahead of time, so he can't concoct a list that's always bad for us. Let's assume that in step (2) the adversary chooses one function hash in H at random; he then crafts a list of n collisions under hash modulo m, and sends that for step (3), crossing fingers that at runtime H[r] will be the same hash he chose.

This is a serious bet for the adversary: the list he crafted collides under hash, but will just be a random input under any other hash function in H. If he wins this bet, our run time will be worst case O(n) like before, but if he loses, then we're just being given a random input, which takes the average O(1) time. And indeed most times the adversary will lose; he wins only once every |H| challenges, and we can make |H| very large.

Contrast this result to the previous algorithm where the adversary always won the challenge. Handwaving here a bit, but since most times the adversary will fail, and this is true for all possible strategies the adversary can try, it follows that although the worst case is O(n), the expected worst case is in fact O(1).


Again, this is not a formal proof. The guarantee we get from this expected worst case analysis is that our run time is now independent of any specific input. This is a truly random guarantee, as opposed to the average case analysis where we showed a motivated adversary could easily craft bad inputs.
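
As a sketch of how the randomized version can look in practice, here is a small Python example using the classic multiply-add family h(x) = ((a*x + b) mod p) mod m for integer keys; drawing a and b at construction time plays the role of the seed r. The constants and names are illustrative, not from this answer or any specific library.

import random

P = 2_305_843_009_213_693_951            # prime (2^61 - 1), larger than any key below

class RandomizedTable:
    def __init__(self, num_buckets: int):
        self.m = num_buckets
        self.a = random.randrange(1, P)   # private, per-instance random seed
        self.b = random.randrange(0, P)
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key: int) -> int:
        return ((self.a * key + self.b) % P) % self.m

    def put(self, key: int, value):
        self.buckets[self._bucket(key)].append((key, value))

    def get(self, key: int, default=None):
        for k, v in self.buckets[self._bucket(key)]:
            if k == key:
                return v
        return default

t = RandomizedTable(1024)
for i in range(10_000):                   # the multiples-of-m attack from before
    t.put(i * 1024, i)
print(max(len(b) for b in t.buckets))     # keys are spread out; no bucket holds all 10000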


It seems, based on the discussion here, that if X is the ceiling of (# of elements in table / # of bins), then a better answer is O(log(X)), assuming an efficient implementation of bin lookup.