I'm currently faced with the problem of clustering around 500,000 latitude/longitude pairs in Python. So far I've tried computing a distance matrix with NumPy (to pass into scikit-learn's DBSCAN), but with such a large input it quickly raises a MemoryError.
The points are stored in tuples containing the latitude, longitude, and the data value at that point.
In short, what is the most efficient way to spatially cluster a large number of latitude/longitude pairs in Python? For this application, I'm willing to sacrifice some accuracy in the name of speed.
Edit: The number of clusters for the algorithm to find is unknown ahead of time.
Older versions of DBSCAN in scikit-learn would compute a complete distance matrix. Unfortunately, a distance matrix needs O(n^2) memory, and that is almost certainly where you run out of memory: for 500,000 points, a float64 distance matrix alone takes 500,000² × 8 bytes, about 2 TB.
Newer versions of scikit-learn (which version are you using?) should be able to work without a distance matrix, at least when using an index. At 500,000 objects, you do want index acceleration, as it reduces the runtime from O(n^2) to O(n log n).
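To make the difference concrete, here is a minimal sketch (the random placeholder data and the eps and min_samples values are assumptions to tune for your own points): fitting DBSCAN on the raw coordinates instead of a precomputed matrix lets scikit-learn build a spatial index internally and never materialize the n x n matrix.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Placeholder data standing in for your 500,000 points.
    coords = np.random.rand(500_000, 2)

    # Memory-hungry variant: metric='precomputed' needs the full n x n
    # matrix (~2 TB of float64 at n = 500,000), which is what fails.
    # dist = pairwise_distances(coords)
    # DBSCAN(eps=0.01, metric='precomputed').fit(dist)

    # Index-friendly variant: pass the coordinates directly; eps and
    # min_samples here are illustrative, not recommendations.
    labels = DBSCAN(eps=0.01, min_samples=10).fit_predict(coords)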
I don't know how well scikit-learn supports geodetic distance in its indexes, though. ELKI is the only tool I know of that can use R*-tree indexes to accelerate geodetic distance, which makes it extremely fast for this task (in particular when bulk-loading the index). You should give it a try.
Have a look at the scikit-learn indexing documentation, and try setting algorithm='ball_tree'.
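As a concrete sketch of that suggestion: combining algorithm='ball_tree' with metric='haversine' gives you geodetic (great-circle) distance in scikit-learn. Note that the haversine metric expects [latitude, longitude] in radians and returns distances in radians, so eps has to be scaled by the Earth's radius; the radius constant and all parameter values below are assumptions to adapt.

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Synthetic lat/lon in degrees, standing in for your real points.
    latlon_deg = np.random.uniform([-90.0, -180.0], [90.0, 180.0], size=(10_000, 2))

    # The haversine metric expects [lat, lon] in radians.
    coords = np.radians(latlon_deg)

    EARTH_RADIUS_KM = 6371.0088    # mean Earth radius
    eps_km = 1.5                   # assumed neighborhood radius; tune for your data

    db = DBSCAN(
        eps=eps_km / EARTH_RADIUS_KM,  # haversine distances come back in radians
        min_samples=10,                # assumed; tune for your data
        metric='haversine',            # great-circle distance on the unit sphere
        algorithm='ball_tree',         # the ball tree supports haversine
    )
    labels = db.fit_predict(coords)

    # Label -1 marks noise; everything else is a cluster id.
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"found {n_clusters} clusters")

Since the number of clusters is not a parameter here, this also fits your requirement that the cluster count is unknown ahead of time.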