 

How can I test that my hash function is good in terms of max-load?

I have read through various papers on the 'Balls and Bins' problem, and it seems that if a hash function is working right (i.e. it is effectively a random distribution), then the following should hold if I hash n values into a hash table with n slots (or bins); a rough empirical check of these is sketched after the list:

  1. Probability that a bin is empty, for large n, is 1/e.
  2. Expected number of empty bins is n/e.
  3. Probability that a bin has exactly k balls is <= 1/(e·k!) (corrected).
  4. Probability that a bin has at least k collisions is <= ((e/k)^k)/e (corrected).
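
For example, I imagine checking these four with something like the following rough Python sketch (it uses random string keys and Python's built-in hash() purely as a stand-in for the hash function being tested):

```python
# Rough check of properties 1-4 using random string keys and Python's
# built-in hash() as a stand-in for the hash function under test.
import math
import random
import string
from collections import Counter

def random_key(length=12):
    return ''.join(random.choices(string.ascii_letters, k=length))

n = 100_000
counts = [0] * n
for _ in range(n):
    counts[hash(random_key()) % n] += 1

loads = Counter(counts)   # loads[k] = number of bins holding exactly k keys

# 1 and 2: empty bins vs 1/e and n/e
print("empty fraction:", loads[0] / n, "vs 1/e =", 1 / math.e)
print("empty bins:    ", loads[0], "vs n/e =", n / math.e)

# 3: fraction of bins with exactly k balls vs 1/(e*k!)
for k in range(1, 6):
    print(k, loads[k] / n, "<=", 1 / (math.e * math.factorial(k)))

# 4: fraction of bins with at least k balls vs ((e/k)**k)/e, as stated above
for k in range(2, 8):
    at_least_k = sum(v for load, v in loads.items() if load >= k) / n
    print(k, at_least_k, "<=", (math.e / k) ** k / math.e)
```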

These look easy to check. But the max-load test (the maximum number of collisions in any bin, with high probability) is usually stated vaguely.

Most texts state that the maximum number of collisions in any bin is O( ln(n) / ln(ln(n)) ). Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log, usually without defining them, or state that log means log base e and then use ln elsewhere.

Is ln the log to base e or base 2? Is this max-load formula right? And how big should n be to run a meaningful test?

This lecture seems to cover it best, but I am no mathematician.

http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf

BTW, 'with high probability' seems to mean with probability 1 - 1/n.

philcolbourn asked Oct 15 '22

2 Answers

That is a fascinating paper/lecture; it makes me wish I had taken a formal algorithms class.

I'm going to take a stab at some answers here, based on what I've just read from that, and feel free to vote me down. I'd appreciate a correction, though, rather than just a downvote :) I'm also going to use n and N interchangeably here, which is a big no-no in some circles, but since I'm just copy-pasting your formulae, I hope you'll forgive me.

First, the base of the logs. These numbers are given as big-O notation, not as absolute formulae. That means that you're looking for something 'on the order of ln(n) / ln(ln(n))', not with an expectation of an absolute answer, but more that as n gets bigger, the relationship of n to the maximum number of collisions should follow that formula. The details of the actual curve you can graph will vary by implementation (and I don't know enough about the practical implementations to tell you what's a 'good' curve, except that it should follow that big-O relationship). Those two formulae that you posted are actually equivalent in big-O notation. The 3 in the second formula is just a constant, and is related to a particular implementation. A less efficient implementation would have a bigger constant.

With that in mind, I would run empirical tests, because I'm a biologist at heart and I was trained to avoid hard-and-fast proofs as indications of how the world actually works. Start with N as some number, say 100, and find the bin with the largest number of collisions in it. That's your max-load for that run. Now, your examples should be as close as possible to what you expect actual users to use, so maybe you want to randomly pull words from a dictionary or something similar as your input.
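
A single run might look like this rough sketch (Python here just for illustration; `my_hash` is a placeholder for the hash function being tested, and the word-list path is an assumption, so substitute your own input):

```python
# One empirical max-load run: hash N sampled keys into N bins and report
# the fullest bin. `my_hash` is a placeholder for the hash function under
# test, and the word-list path is an assumption.
import random

def my_hash(key: str) -> int:
    return hash(key)   # replace with the hash function you are testing

def max_load(keys, n_bins):
    counts = [0] * n_bins
    for key in keys:
        counts[my_hash(key) % n_bins] += 1
    return max(counts)

with open('/usr/share/dict/words') as f:
    words = [line.strip() for line in f if line.strip()]

N = 100
sample = random.sample(words, N)
print("max load for one run, N =", N, ":", max_load(sample, N))
```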

Run that test many times, at least 30 or 40. Since you're using random numbers, you'll need to satisfy yourself that the average max-load you're getting is close to the theoretical 'expectation' of your algorithm. Expectation is just the average, but you'll still need to find it, and the tighter your std dev/std err about that average, the more you can say that your empirical average matches the theoretical expectation. One run is not enough, because a second run will (most likely) give a different answer.

Then, increase N, to say, 1000, 10000, etc. Increase it logarithmically, because your formula is logarithmic. As your N increases, your max-load should increase on the order of ln(n) / ln(ln(n)). If it increases at a rate of 3*ln(n) / ln(ln(n)), that means that you're following the theory that they put forth in that lecture.
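
Putting the last two paragraphs together, a sketch of the scaling experiment could look like this (again Python for illustration, with random string keys standing in for realistic input):

```python
# Sketch of the scaling experiment: average the max load over many runs at
# each N and compare it against ln(N)/ln(ln(N)). Random string keys are used
# here purely as a stand-in for realistic input.
import math
import random
import statistics
import string

def my_hash(key: str) -> int:
    return hash(key)   # the hash function under test

def one_run(n):
    counts = [0] * n
    for _ in range(n):
        key = ''.join(random.choices(string.ascii_letters, k=12))
        counts[my_hash(key) % n] += 1
    return max(counts)

for n in (100, 1_000, 10_000, 100_000):
    runs = [one_run(n) for _ in range(40)]
    theory = math.log(n) / math.log(math.log(n))
    print(f"N={n:>7}  avg max load={statistics.mean(runs):5.2f}  "
          f"stdev={statistics.pstdev(runs):4.2f}  "
          f"ratio to ln(N)/ln(ln(N))={statistics.mean(runs) / theory:4.2f}")
```

If the hash behaves like a random function, the ratio in the last column should settle around a modest constant (somewhere between 1 and the lecture's factor of 3) rather than grow with N.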

This kind of empirical test will also show you where your approach breaks down. It may be that your algorithm works well for N < 10 million (or some other number), but above that, it starts to collapse. Why could that be? Maybe you have some limitation to 32 bits in your code without realizing it (i.e., using a 'float' instead of a 'double'), or some other implementation detail. These kinds of details let you know where your code will work well in practice, and then as your practical needs change, you can modify your algorithm. Maybe making the algorithm work for very large datasets makes it very inefficient for very small ones, or vice versa, so pinpointing that tradeoff will help you further characterize how you could adapt your algorithm to particular situations. Always a useful skill to have.

EDIT: a proof of why the base of the log function doesn't matter with big-O notation:

log(N) = log_10(N) = log_b(N) / log_b(10) = (1/log_b(10)) * log_b(N)

1/log_b(10) is a constant, and in big-O notation, constants are ignored. Base changes are free, which is why you're encountering such variation in the papers.
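
A quick numeric illustration of that constant:

```python
# The ratio of logs in two different bases is a constant
# (here 1/ln(2) ~ 1.4427), independent of n.
import math

for n in (10**3, 10**6, 10**9):
    print(n, math.log2(n) / math.log(n))
```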

mmr answered Oct 20 '22

Here is a rough start to the solution of this problem involving uniform distributions and maximum load.

Instead of bins and balls or urns or boxes or buckets or m and n, people (p) and doors (d) will be used as designations.

There is an exact expected value for each of the ordered doors given a certain number of people. For example, with 5 people and 5 doors, the expected maximum door is exactly 1.2864 {(1429-625) / 625} above the mean (p/d), and the expected minimum door is exactly 0.9616 below the mean {(24-625) / 625 = -0.9616}. The highest door's distance from the mean is a little larger in absolute value than the lowest door's, because all of the people could go through one door, but no fewer than zero can go through any door. With large numbers of people (p/d > 3000), the difference between these two absolute distances becomes negligible.

For an odd number of doors, the center door is essentially zero and is not scalable, but all of the other doors are scalable from certain values representing p=d. These rounded values for d=5 are:

-1.163  -0.495  0*  0.495  1.163

*the middle value slowly approaches zero from about -0.12 as p grows

From these values, you can compute the expected number of people going through each of the 5 ordered doors, including the maximum door, for any number of people. Except for the middle ordered door, the difference from the mean scales with sqrt(p/d).

So, for p=50,000 and d=5:

Expected number of people going through the maximum door (which could be any of the 5 doors) = 1.163 * sqrt(p/d) + p/d = 1.163 * sqrt(10,000) + 10,000 = 10,116.3.

For p/d < 3,000, the result from this equation must be slightly increased.
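
As a sketch, that rule is straightforward to put into code (Python used for illustration; the coefficients are the rounded d=5 values quoted above, and per the note above the approximation needs a small correction for p/d < 3,000):

```python
# Sketch of the scaling rule quoted above: expected count at each ordered
# door ~ p/d + c * sqrt(p/d), using the rounded c values given for d=5.
import math

def expected_ordered_doors(p, d, coeffs):
    mean = p / d
    return [mean + c * math.sqrt(mean) for c in coeffs]

coeffs_d5 = [-1.163, -0.495, 0.0, 0.495, 1.163]
print(expected_ordered_doors(50_000, 5, coeffs_d5))
# last entry ~ 10116.3, matching the worked example above
```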

With more people, the middle door's value slowly approaches zero, from -0.11968 at p=100 and d=5. It can always be rounded up to zero, and like the other 4 doors it has quite a large variance.

The values for 6 doors are: -1.272 -0.643 -0.202 0.202 0.643 1.272

For 1000 doors, the approximate values are: -3.25, -2.95, -2.79 … 2.79, 2.95, 3.25

For any d and p, there is an exact expected value for each of the ordered doors. Hopefully, a good approximation (with a relative error < 1%) exists. Some professor or mathematician somewhere must know.

For testing uniform distribution, you will need a number of averaged ordered sessions (750-1000 works well) rather than a greater number of people. No matter what, the variances between valid sessions are great. That's the nature of randomness. Collisions are unavoidable. *

The expected values for 5 and 6 doors were obtained by sheer brute-force computation using 640-bit integers, averaging the convergence of the absolute values of corresponding opposite doors.

For d=5 and p=170: -6.63901 -2.95905 -0.119342 2.81054 6.90686 (27.36099 31.04095 33.880658 36.81054 40.90686)

For d=6 and p=108: -5.19024 -2.7711 -0.973979 0.734434 2.66716 5.53372 (12.80976 15.2289 17.026021 18.734434 20.66716 23.53372)
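
For comparison, here is a rough Monte-Carlo sketch of the 'averaged ordered sessions' idea (Python for illustration; it uses a uniform random generator in place of a hash function, so it only produces the baseline that a good hash should approximate):

```python
# Monte-Carlo sketch of the 'averaged ordered sessions' idea: simulate many
# sessions of p people choosing uniformly among d doors, sort each session's
# door counts, and average them position by position.
import random

def ordered_session(p, d):
    counts = [0] * d
    for _ in range(p):
        counts[random.randrange(d)] += 1
    return sorted(counts)

def averaged_ordered(p, d, sessions=1000):
    totals = [0] * d
    for _ in range(sessions):
        for i, c in enumerate(ordered_session(p, d)):
            totals[i] += c
    return [t / sessions for t in totals]

print(averaged_ordered(170, 5))   # should land near 27.4, 31.0, 33.9, 36.8, 40.9
```

Running the same procedure with the hash function under test choosing the door (instead of random.randrange) over real keys gives the figures to compare against this uniform baseline.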

I hope that you are able to distribute your data evenly.

  • It's almost guaranteed that all of George Foreman's sons (all named George) or some similar situation will fight against your hash function, and proper contingency planning is the work of all good programmers.
C. G. Neal answered Oct 20 '22