
Clustering 500,000 geospatial points in Python

I'm currently faced with the problem of clustering around 500,000 latitude/longitude pairs in Python. So far I've tried computing a distance matrix with NumPy (to pass into scikit-learn's DBSCAN), but with such a large input it quickly raises a MemoryError.

The points are stored in tuples containing the latitude, longitude, and the data value at that point.

In short, what is the most efficient way to spatially cluster a large number of latitude/longitude pairs in Python? For this application, I'm willing to sacrifice some accuracy in the name of speed.

Edit: The number of clusters for the algorithm to find is unknown ahead of time.
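
For reference, a minimal sketch of the failing approach described above, using scikit-learn's haversine_distances in place of the original NumPy code; `points` and the eps value are hypothetical stand-ins:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import haversine_distances

# `points` is assumed to be an (n, 3) array of (lat, lon, value) rows,
# matching the tuples described above; only the coordinates matter here.
coords = np.radians(points[:, :2])

# For n = 500,000 this tries to allocate an n x n float64 matrix (~2 TB)
# and raises MemoryError before DBSCAN ever runs.
dist = haversine_distances(coords)
db = DBSCAN(eps=0.001, metric="precomputed").fit(dist)  # eps in radians
```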

asked Jun 03 '14 by user3681226

1 Answer

Older versions of DBSCAN in scikit-learn would compute a complete pairwise distance matrix.

Unfortunately, computing a distance matrix needs O(n^2) memory, and that is probably where you run out of memory: for n = 500,000, an n x n matrix of float64 distances is 500,000^2 * 8 bytes, roughly 2 TB.

Newer versions (which version do you use?) of scikit-learn should be able to work without a distance matrix, at least when using an index. With 500,000 objects, you definitely want index acceleration, as it reduces the runtime from O(n^2) to O(n log n).

I don't know how well scikit-learn supports geodetic distance in its indexes, though. ELKI is the only tool I know of that can use R*-tree indexes to accelerate geodetic distance, which makes it extremely fast for this task (in particular when bulk-loading the index). You should give it a try.

Have a look at the scikit-learn indexing documentation, and try setting algorithm='ball_tree'.
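
A minimal sketch of that suggestion (not from the original answer): scikit-learn's ball tree supports the haversine metric directly, so DBSCAN can cluster the raw coordinates without any precomputed matrix. The `points` array, `eps_km`, and `min_samples` values below are hypothetical and would need tuning for your data:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# `points` is assumed to be an (n, 3) array of (lat, lon, value) rows.
# The haversine metric expects coordinates in radians.
coords = np.radians(points[:, :2])

earth_radius_km = 6371.0
eps_km = 1.0  # hypothetical neighborhood radius in km

db = DBSCAN(
    eps=eps_km / earth_radius_km,  # haversine distances are in radians
    min_samples=5,                 # hypothetical; tune for your data
    metric="haversine",
    algorithm="ball_tree",         # index acceleration, no O(n^2) matrix
).fit(coords)

labels = db.labels_  # -1 marks noise; other integers are cluster ids
```

Note that the number of clusters falls out of the density parameters (eps and min_samples), so it does not need to be known ahead of time, matching the question's edit.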

answered Oct 14 '22 by Has QUIT--Anony-Mousse