Clustering ~100,000 Short Strings in Python

I want to cluster ~100,000 short strings by something like q-gram distance or simple "bag distance" or maybe Levenshtein distance in Python. I was planning to fill out a distance matrix (100,000 choose 2 comparisons) and then do hierarchical clustering with pyCluster. But I'm running into some memory problems before even getting off the ground. For example, the distance matrix is too large for numpy.

import numpy

aa = numpy.zeros((100000, 100000))
# ValueError: array is too big.

Does this seem like a reasonable thing to do? Or am I doomed to memory problems in this task? Thanks for your help.

asked Nov 22 '10 by 135498

2 Answers

100,000 * 100,000 entries * 32 bits = 40 GB, which would be a lot of RAM, so yes, you need to find another way. (And even if you could fit this data into memory, the calculation would take too long.)
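For reference, the back-of-the-envelope arithmetic, keeping in mind that numpy.zeros defaults to 64-bit floats, which doubles the figure:

entries = 100_000 * 100_000   # 10 billion matrix entries
print(entries * 4 / 1e9)      # 40.0 GB with 32-bit (4-byte) floats
print(entries * 8 / 1e9)      # 80.0 GB with numpy's default float64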

One common and easy shortcut is to cluster a small random subset of the data, and after you find the clusters of this subset, just put the rest of the points into the clusters where they fit best.
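A minimal sketch of that shortcut, using scipy's hierarchical clustering and the python-Levenshtein package rather than pyCluster (the sample size of 2,000 and the distance cutoff of 3 are illustrative guesses, not recommendations):

import random
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
import Levenshtein  # pip install python-Levenshtein

def cluster_sample_then_assign(strings, sample_size=2000, cutoff=3.0):
    # 1. Hierarchically cluster a random subset small enough for a
    #    condensed distance matrix (~2M distances instead of ~5 billion).
    sample = random.sample(strings, sample_size)
    n = len(sample)
    condensed = np.empty(n * (n - 1) // 2)
    k = 0
    for i in range(n):
        for j in range(i + 1, n):
            condensed[k] = Levenshtein.distance(sample[i], sample[j])
            k += 1
    labels = fcluster(linkage(condensed, method='average'),
                      t=cutoff, criterion='distance')

    # 2. Put each remaining string into the cluster of its nearest sampled string.
    def nearest_label(s):
        best = min(range(n), key=lambda i: Levenshtein.distance(s, sample[i]))
        return labels[best]

    return [nearest_label(s) for s in strings]

The assignment step still does 100,000 * 2,000 distance computations, but that is linear in the data size rather than quadratic, and it only needs memory proportional to the sample's distance matrix.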

answered by tom10


10 billion elements is an awful lot. I don't know much about q-grams, but if that matrix is sparse, you could use a dict with 200,000-ish entries.
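A sketch of that idea, storing only the pairs whose distance falls under some cutoff (the cutoff of 2 and the use of Levenshtein.distance are placeholders for whatever metric and threshold make sense for your data):

import Levenshtein  # or any other string metric

def sparse_distances(strings, cutoff=2):
    # Keep only "close" pairs; any pair missing from the dict counts as far apart.
    close = {}
    for i in range(len(strings)):
        for j in range(i + 1, len(strings)):
            d = Levenshtein.distance(strings[i], strings[j])
            if d <= cutoff:
                close[(i, j)] = d
    return close

Note this caps the memory, not the work: it still performs all ~5 billion comparisons, so in practice you'd pair it with some blocking scheme (e.g. only comparing strings that share a q-gram) to cut the runtime as well.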

answered by nmichaels