
How can I improve the complexity of a function that sorts a list for each point on it?

The following function:

-- Assumed imports: Data.List (sortBy), Data.Ord (comparing),
--   Data.Map (Map), qualified Data.Map as Map, and Linear (V2, distance)
sortByDist :: (Ord a, Floating a, RealFrac a) => [V2 a] -> Map (V2 a) [V2 a]
sortByDist graph = Map.fromList $ map sort graph where
    sort point = (point, sortBy (comparing (distance point)) graph)

Maps each point P of the list to the list of all points ordered by their distance to P. So, for example, sortByDist [a, b, c, d] Map.! b is the list [b, a, c, d] if a is the nearest point to b, c is the 2nd nearest, and d is the 3rd.
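For concreteness, here is roughly what that looks like in GHCi with some made-up collinear points (the coordinates below are mine, not from the question):

ghci> let [a, b, c, d] = [V2 0 0, V2 1 0, V2 3 0, V2 6 0] :: [V2 Double]
ghci> sortByDist [a, b, c, d] Map.! b
[V2 1.0 0.0,V2 0.0 0.0,V2 3.0 0.0,V2 6.0 0.0]

That is b first (distance 0), then a (distance 1), c (distance 2), and d (distance 5).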

Since it performs an n * log n sort for each of the n elements, the complexity is n^2 * log n. This agrees with benchmarks of the time required to run it on a list of N points:

points  time
200     0m0.086s
400     0m0.389s
600     0m0.980s
800     0m1.838s
1000    0m2.994s
1200    0m4.350s
1400    0m6.477s
1600    0m8.726s
3200    0m39.216s    
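As a rough sanity check of the n^2 * log n estimate against the last two rows (the base of the logarithm cancels in the ratio):

1600 -> 3200:  predicted factor  4 * log 3200 / log 1600  ≈ 4.38
               measured factor   39.216s / 8.726s         ≈ 4.49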

How much can this be improved theoretically? Is it possible to get it down to N * log N?

asked Mar 02 '15 by MaiaVictor


1 Answer

As luqui commented, using a quadtree or similar will probably help. Building the tree should take O(n log n): log n passes, each of them O(n) selection and partition.

Once you have the tree, you can traverse it to build the lists. The difference between the lists for a node and its children should generally be small, and when some are large, that should tend to force others to be small. Using an adaptive sort (e.g., adaptive merge sort or adaptive splay sort) should thus give good performance, but analyzing the complexity will not be easy.

If you want to try to get some sharing, you will have to represent the lists using a sequence type (e.g. Data.Sequence) and then try to figure out relationships between squares at various scales. I have serious doubts about the potential of such an approach to reduce time complexity.
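To make the tree-building step concrete, here is a minimal sketch in the spirit of "a quadtree or similar": a small k-d-style tree over V2 Double points. The names (Axis, KdTree, build) and the details are mine, not part of the answer, and it finds the median by sorting rather than by an O(n) selection, so this version builds in O(n log^2 n) rather than the O(n log n) described above. It keeps small buckets as leaves and does not show the later traversal that produces the per-point lists.

import Data.List (partition, sortBy)
import Data.Ord (comparing)
import Linear (V2(..))

data Axis = X | Y

-- A 2-d tree: each internal node splits space at one coordinate on one axis.
data KdTree
  = Leaf [V2 Double]
  | Node Axis Double KdTree KdTree

coord :: Axis -> V2 Double -> Double
coord X (V2 x _) = x
coord Y (V2 _ y) = y

other :: Axis -> Axis
other X = Y
other Y = X

-- Build by repeated median splits, alternating axes.
build :: Axis -> [V2 Double] -> KdTree
build axis ps
  | length ps <= 4 || null ls || null rs = Leaf ps   -- small or unsplittable bucket
  | otherwise = Node axis m (build ax' ls) (build ax' rs)
  where
    sorted   = sortBy (comparing (coord axis)) ps    -- median via sort (simpler, but O(n log n) per level)
    m        = coord axis (sorted !! (length sorted `div` 2))
    (ls, rs) = partition ((< m) . coord axis) ps
    ax'      = other axis

Building the tree is the easy half; extracting the per-point distance-sorted lists from it with enough shared work to beat n^2 * log n is the part whose complexity, as noted above, is hard to pin down.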

answered Nov 15 '22 by 2 revs