Best hash function for mixed numeric and literal identifiers

For performance reasons I need to split a set of objects, each identified by a string, into groups. Objects may be identified either by a number or by a string in prefixed (qualified) form, with dots separating the parts of the identifier:

12
323
12343
2345233
123123131
ns1:my.label.one
ns1:my.label.two
ns1:my.label.three
ns1:system.text.one
ns2:edit.box.grey
ns2:edit.box.black
ns2:edit.box.mixed

Numeric identifiers range from 1 to several million. Among the text identifiers, very many are likely to start with the same namespace prefix (ns1:) and the same path prefix (edit.box.).

What is the best hash function for this purpose? It would also be good if I could somehow predict bucket sizes from identifier statistics. Are there any good articles on constructing a hash function from such statistical information?

There are several million such identifiers, and the goal is to split them into groups of 1-2 thousand based on the hash function.

asked Dec 14 '09 at 16:12 by Andrey Adamovich


1 Answer

Two good hash functions can both be mapped into the same space of values, and combining them in this way will in general not cause any new problems.

So your hash function can look like this:

if it's an integer value:
    return int_hash(integer value)
return string_hash(string value)

Unless your integers clump around certain values modulo N, where N is a possible number of buckets, int_hash can simply return its input.
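
A minimal runnable sketch of that dispatch, in Python (my own illustration, not part of the answer; the names identifier_hash and NUM_BUCKETS, and the use of Python's built-in string hash as a stand-in, are all assumptions):

NUM_BUCKETS = 2048  # assumed bucket count; pick whatever yields groups of 1-2 thousand

def identifier_hash(ident: str) -> int:
    if ident.isdigit():
        # Numeric identifier: the value itself is already well spread,
        # so the integer hash can simply be the identity.
        return int(ident)
    # Qualified label such as "ns1:my.label.one": fall back to a string hash.
    # Python's built-in hash() is used here only as a placeholder; any decent
    # string hash (e.g. djb2 below) would do. Note that hash() is salted per
    # process, so bucket assignments are only stable within a single run.
    return hash(ident) & 0x7FFFFFFF

def bucket_of(ident: str) -> int:
    # Reduce the hash to a bucket index in 0..NUM_BUCKETS-1.
    return identifier_hash(ident) % NUM_BUCKETS

For example, bucket_of("2345233") and bucket_of("ns2:edit.box.grey") both land in the range 0..NUM_BUCKETS-1, so numeric and textual identifiers can share the same table of buckets.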

Picking a string hash is not a novel problem. Try "djb2" (http://www.cse.yorku.ca/~oz/hash.html) or similar, unless you have obscene performance requirements.
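
For reference, djb2 is short enough to quote; a Python transcription (mine, masked to 32 bits to mirror the unsigned arithmetic of the C original) looks like this:

def djb2(s: str) -> int:
    # djb2: start from 5381 and fold in each byte as hash = hash * 33 + byte.
    h = 5381
    for byte in s.encode("utf-8"):
        h = (h * 33 + byte) & 0xFFFFFFFF  # keep 32 bits, like unsigned overflow in C
    return h

Because every byte is folded into the running value, two labels that share the ns1: or edit.box. prefix still diverge as soon as their suffixes differ.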

I don't think there's much point in modifying the hash function to take account of the common prefixes. If your hash function is good to start with, then it is unlikely that common prefixes will create any clumping of hash values.

If you do this, and the hash doesn't unexpectedly perform badly, then putting your several million hash values into a few thousand buckets gives bucket populations that are approximately Poisson distributed (and hence roughly normal), with mean (several million / a few thousand) and variance approximately equal to that mean.

With an average of 1500 entries per bucket, that makes the standard deviation roughly sqrt(1500), somewhere around 39. 95% of a normal distribution lies within 2 standard deviations of the mean, so 95% of your buckets should contain roughly 1420-1580 entries, unless I've done my sums wrong. Is that adequate, or do you need the buckets to be of more closely similar sizes?
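
If you want to sanity-check that spread, here is a quick simulation (mine, not part of the original answer) that throws three million items into two thousand buckets uniformly at random, which is how a good hash should behave:

import random
from statistics import mean, stdev

ITEMS, BUCKETS = 3_000_000, 2_000
counts = [0] * BUCKETS
for _ in range(ITEMS):
    # A good hash acts like a uniform random choice of bucket.
    counts[random.randrange(BUCKETS)] += 1

print(mean(counts))   # ~1500
print(stdev(counts))  # ~sqrt(1500), i.e. about 39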

answered Oct 09 '22 at 09:10 by Steve Jessop