 

python data type to track duplicates

I often keep track of duplicates with something like this:

processed = set() 
for big_string in strings_generator:
    if big_string not in processed:
        processed.add(big_string)
        process(big_string)

I am dealing with massive amounts of data, so I don't want to maintain the processed set in memory. I have a version that uses sqlite to store the data on disk, but then this process runs much slower.
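
For reference, the sqlite version is roughly along these lines (a minimal sketch — the table layout and commit strategy here are illustrative, not the exact code):

import sqlite3

# disk-backed version of the processed set (sketch; schema is illustrative)
conn = sqlite3.connect('processed.db')
conn.execute('CREATE TABLE IF NOT EXISTS processed (big_string TEXT PRIMARY KEY)')

for big_string in strings_generator:
    seen = conn.execute('SELECT 1 FROM processed WHERE big_string = ?',
                        (big_string,)).fetchone()
    if seen is None:
        conn.execute('INSERT INTO processed (big_string) VALUES (?)', (big_string,))
        process(big_string)
conn.commit()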

To cut down on memory use, what do you think of using hashes like this:

processed = set()
for big_string in strings_generator:
    key = hash(big_string)
    if key not in processed:
        processed.add(key)
        process(big_string)

The drawback is I could lose data through occasional hash collisions. 1 collision in 1 billion hashes would not be a problem for my use.

I tried the md5 hash but found generating the hashes became a bottleneck.
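
The md5 version looked roughly like this (a sketch — the explicit encoding step is an assumption, only needed if the strings are not already bytes):

import hashlib

processed = set()
for big_string in strings_generator:
    # 16-byte md5 digest as the dedup key; computing it turned out to be the bottleneck
    key = hashlib.md5(big_string.encode('utf-8')).digest()
    if key not in processed:
        processed.add(key)
        process(big_string)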

What would you suggest instead?

asked Dec 12 '10 by hoju


2 Answers

I'm going to assume you are hashing web pages. That means you have to hash at most 55 billion web pages (and that estimate almost certainly overlooks some overlap).

You are willing to accept a less than one in a billion chance of collision, which means that if we use a hash function whose number of collisions is close to what we would get if the hash were truly random[^1], we want a hash range of size (55*10^9)*10^9. That is log2((55*10^9)*10^9) ≈ 66 bits.

[^1]: since the hash can be considered to be chosen at random for this purpose, p(collision) = (occupied range)/(total range)
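
A quick way to check that arithmetic:

import math

items = 55 * 10**9      # upper bound on pages to hash
tolerance = 10**9       # accept a 1-in-a-billion chance of collision
print(math.log2(items * tolerance))   # ~65.6, so a hash range of at least 66 bits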

Since there is a speed issue, but no real cryptographic concern, we can use a non-cryptographic hash of more than 66 bits with the nice collision distribution property outlined above.

It looks like we are looking for the 128-bit version of the Murmur3 hash. People have been reporting speed increases upwards of 12x comparing Murmur3_128 to MD5 on a 64-bit machine. You can use this library to do your speed tests. See also this related answer, which:

  • shows speed test results in the same range as Python's built-in str hash, whose speed you have already deemed acceptable elsewhere – though Python's hash is a 32-bit hash, leaving you only 2^32/(10^9) (that is, only about 4) values stored with a less than one in a billion chance of collision.
  • spawned a library of Python bindings that you should be able to use directly (a sketch using one such binding follows below).
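
As an illustration, the dedup loop with a 128-bit Murmur3 key might look like this (a sketch; mmh3 is one of the available Python bindings for Murmur3, not necessarily the exact library linked above):

import mmh3  # pip install mmh3

processed = set()
for big_string in strings_generator:
    # 128-bit MurmurHash3: fast, non-cryptographic, well past the 66-bit requirement
    key = mmh3.hash128(big_string.encode('utf-8'))
    if key not in processed:
        processed.add(key)
        process(big_string)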

Finally, I hope to have outlined the reasoning that would allow you to compare with other hash functions of varied size should you feel the need for it (e.g. if you relax your collision tolerance, or if the size of your indexed set is smaller than the whole Internet, etc.).

answered by Francois G


You have to decide which is more important: space or time.

If time, then you need to create unique representations of your large_item that take as little space as possible (probably some str value), are easy (i.e. quick) to calculate, and will not have collisions, and then store them in a set.

If space, find the quickest disk-backed solution you can and store the smallest possible unique value that will identify a large_item.

So either way, you want small unique identifiers -- depending on the nature of large_item this may be a big win, or not possible.
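
For example, a fixed-size digest kept in a set is one way to get such a small identifier (a sketch — blake2b with a 16-byte digest is just one illustrative choice, and it assumes large_item is a str):

import hashlib

seen = set()
for large_item in many_items:
    # 16 bytes per item instead of the full html; effectively collision-free
    key = hashlib.blake2b(large_item.encode('utf-8'), digest_size=16).digest()
    if key not in seen:
        seen.add(key)
        process(large_item)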

Update

"they are strings of html content"

Perhaps a hybrid solution then: keep a set in memory of the normal Python hash, while also keeping the actual html content on disk, keyed by that hash; when you check whether the current large_item's hash is in the set and get a positive, double-check with the disk-backed copy to see if it's a real hit or a collision, then skip or process as appropriate. Something like this:

import dbf

# disk-backed table: a numeric field for the hash plus a memo field for the full content
on_disk = dbf.Table('/tmp/processed_items', 'hash N(17,0); value M')
index = on_disk.create_index(lambda rec: rec.hash)

fast_check = set()

def slow_check(hashed, item):
    # confirm a candidate hash hit against the actual content stored on disk
    matches = index.search((hashed,))
    for record in matches:
        if item == record.value:
            return True
    return False

for large_item in many_items:
    hashed = hash(large_item) # only calculate once
    if hashed not in fast_check or not slow_check(hashed, large_item):
        # either a brand-new hash, or a hash collision with different content
        on_disk.append((hashed, large_item))
        fast_check.add(hashed)
        process(large_item)

FYI: dbf is a module I wrote which you can find on PyPI

answered by Ethan Furman