I want to cluster ~100,000 short strings by something like q-gram distance or simple "bag distance" or maybe Levenshtein distance in Python. I was planning to fill out a distance matrix (100,000 choose 2 comparisons) and then do hierarchical clustering with pyCluster. But I'm running into some memory problems before even getting off the ground. For example, the distance matrix is too large for numpy.
import numpy
aa = numpy.zeros((100000, 100000))
ValueError: array is too big.
Does this seem like a reasonable thing to do? Or am I doomed to memory problems in this task? Thanks for your help.
100,000 * 100,000 entries at 4 bytes (32-bit floats) is 40 GB, and numpy's default float64 would need 80 GB. That is far more RAM than you can count on, so yes, you need to find another way. (And even if you could fit the data into memory, computing all of those pairwise distances would take too long.)
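For reference, the back-of-the-envelope arithmetic; note that even the condensed upper-triangle form the question mentions ("100,000 choose 2") only halves the cost:

n = 100_000
full = n * n * 4                  # full float32 matrix: 40 GB
condensed = n * (n - 1) // 2 * 4  # upper triangle only, n choose 2 entries: ~20 GB
print(full / 1e9, condensed / 1e9)  # 40.0 and ~20.0 (gigabytes)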
One common and easy shortcut is to cluster a small random subset of the data, and after you find the clusters of this subset, just put the rest of the points into the clusters where they fit best.
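Here is a minimal sketch of that idea, using scipy.cluster.hierarchy instead of Pycluster and a pure-Python Levenshtein function; sample_size, the linkage method, and the cut threshold t are placeholders you would tune for your data:

import random
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

def levenshtein(a, b):
    # plain dynamic-programming edit distance, no external dependency
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

strings = ["apple", "apples", "banana", "bananna", "cherry", "cherries"] * 100  # stand-in data

# 1. cluster a small random subset (sample_size choose 2 distances is cheap)
sample_size = 200                                  # tune to your memory/time budget
sample = random.sample(strings, min(sample_size, len(strings)))
condensed = [levenshtein(a, b) for a, b in combinations(sample, 2)]
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=2, criterion="distance")    # the cut threshold is problem-specific

# 2. keep one representative string per cluster
reps = {}
for s, lab in zip(sample, labels):
    reps.setdefault(lab, s)

# 3. put every other string into the cluster whose representative is nearest
assignments = {s: min(reps, key=lambda lab: levenshtein(s, reps[lab])) for s in strings}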
10 billion elements is an awful lot. I don't know much about q-grams, but if that matrix is sparse, you could use a dict with 200,000-ish elements, storing only the pairs that are actually close.
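As a rough illustration of that sparse-dict idea, assuming "close" means sharing at least one q-gram and using a bag (multiset) distance as the cheap metric; the helper names here are made up for this sketch:

from collections import Counter, defaultdict
from itertools import combinations

def qgrams(s, q=3):
    # set of q-grams; pad so short strings still yield at least one gram
    s = "#" + s + "#"
    return {s[i:i + q] for i in range(max(1, len(s) - q + 1))}

def bag_distance(a, b):
    # multiset ("bag") distance: a cheap lower bound on Levenshtein distance
    ca, cb = Counter(a), Counter(b)
    return max(sum((ca - cb).values()), sum((cb - ca).values()))

strings = ["apple", "apples", "banana", "bananna", "cherry"]  # stand-in data

# inverted index: q-gram -> indices of the strings containing it
index = defaultdict(set)
for i, s in enumerate(strings):
    for g in qgrams(s):
        index[g].add(i)

# sparse "distance matrix": a dict holding only pairs that share a q-gram
sparse = {}
for bucket in index.values():
    for i, j in combinations(sorted(bucket), 2):
        if (i, j) not in sparse:
            sparse[(i, j)] = bag_distance(strings[i], strings[j])

print(len(sparse), "stored pairs instead of", len(strings) * (len(strings) - 1) // 2)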