How do I speed up all_pairs_dijkstra_path_length

I have a large osmnx (networkx) graph and nx.all_pairs_dijkstra_path_length is taking a long time to calculate.

What possibilities are there to speed up the calculation?

asked Oct 20 '21 by Suuuehgi

1 Answer

import osmnx as ox
import networkx as nx

Let's take this area as a test case:

coords, dist = (51.5178, 9.9601), 9000
G = ox.graph.graph_from_point(coords, dist=dist, network_type='drive', simplify=True)
G = ox.add_edge_speeds(G)        # impute edge speeds (km/h)
G = ox.add_edge_travel_times(G)  # derive 'travel_time' (s) from speed and length
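To get a feel for the problem size, it helps to inspect the graph dimensions first (a quick check; the exact counts depend on the OSM data at download time):

print(len(G.nodes), len(G.edges))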

We use nx.all_pairs_dijkstra_path_length as the baseline.

For this let's create the shorthand bench_nx:

bench_nx = lambda G, weight='travel_time': sum((l:=[ d for t in nx.all_pairs_dijkstra_path_length(G, weight=weight) for d in t[1].values() ])) / len(l)
bench_nx(G)
582.2692172953298
%timeit -n 3 -r 2 bench_nx(G)
53.7 s ± 101 ms per loop (mean ± std. dev. of 2 runs, 3 loops each)
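For readability, here is the same computation written out as a plain function (equivalent to the bench_nx one-liner above):

def mean_path_length(G, weight='travel_time'):
    """Mean shortest-path length over all reachable pairs (self-distances included)."""
    lengths = [d
               for _, dists in nx.all_pairs_dijkstra_path_length(G, weight=weight)
               for d in dists.values()]
    return sum(lengths) / len(lengths)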

Parallelize

def mp_all_pairs_dijkstra_path_length(G, cutoff=None, weight='weight'):
    """
    Multi-core version of
    nx.all_pairs_dijkstra_path_length
    """

    import multiprocessing as mp
    from functools import partial

    # One single-source Dijkstra per node, farmed out to all cores
    f = partial(nx.single_source_dijkstra_path_length,
                G, cutoff=cutoff, weight=weight)

    with mp.Pool(mp.cpu_count()) as p:
        lengths = p.map(f, G)
        for n, l in zip(G, lengths):
            yield n, l

bench_mp = lambda G, weight='travel_time': sum((l:=[ d for t in mp_all_pairs_dijkstra_path_length(G, weight=weight) for d in t[1].values() ])) / len(l)
bench_mp(G)
582.2692172953298
%timeit -n 3 -r 2 bench_mp(G)
20.2 s ± 754 ms per loop (mean ± std. dev. of 2 runs, 3 loops each)

The larger the graph gets, the bigger the advantage seems to be.
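If scheduling overhead or memory use becomes a problem on even larger graphs, a variant worth trying (a sketch; the chunksize value is an arbitrary tuning knob, not a recommendation) aggregates results on the fly with imap_unordered instead of collecting everything with map:

def mp_mean_path_length(G, cutoff=None, weight='travel_time', chunksize=64):
    """Sketch: mean path length, aggregating worker results as they arrive."""
    import multiprocessing as mp
    from functools import partial

    f = partial(nx.single_source_dijkstra_path_length,
                G, cutoff=cutoff, weight=weight)

    total, count = 0.0, 0
    with mp.Pool(mp.cpu_count()) as p:
        # imap_unordered hands out work in chunks and yields results as soon
        # as they are done; ordering is irrelevant when we only take a mean.
        for dists in p.imap_unordered(f, G, chunksize=chunksize):
            total += sum(dists.values())
            count += len(dists)
    return total / count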

graph-tool

Using graph-tool we can speed things up quite a bit more.

graph-tool indexes vertices and edges using integers, so I create two mappings here

  1. nx_node -> gt vertex
  2. (u, v, k) edge -> gt edge

to be able to map from nx to gt.

import graph_tool.all as gt
from collections import defaultdict

G_gt = gt.Graph(directed=G.is_directed())

# Edge property map that will hold the weights: G_gt_weights[edge] = weight
G_gt_weights = G_gt.new_edge_property("double")

# Mapping of nx nodes to gt vertex descriptors
vertices = {}
for node in G.nodes:
    v = G_gt.add_vertex()
    vertices[node] = v

# Mapping of nx edges (u, v, key) to gt edge descriptors
edges = defaultdict(lambda: defaultdict(dict))

for src, dst, k, data in G.edges(data=True, keys=True):
    # Look up the vertex descriptors from our vertices mapping and add the edge
    e = G_gt.add_edge(vertices[src], vertices[dst])
    edges[src][dst][k] = e

    # Save the weight in the property map
    G_gt_weights[e] = data['travel_time']
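A quick sanity check (a minimal sketch) that the conversion is complete before benchmarking against it:

assert G_gt.num_vertices() == len(G.nodes)
assert G_gt.num_edges() == len(G.edges)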

Note: I added the cut-off 1e50 because gt sets non-reachable destinations to distance 1.79e308.

bench_gt = lambda G, weights: sum((l:=[ d for row in gt.shortest_distance(G, weights=weights) for d in row if 0 < d <= 1e50 ])) / len(l)
bench_gt(G_gt, G_gt_weights)
582.4092142257183
%timeit -n 3 -r 2 bench_gt(G_gt, G_gt_weights)
4.76 s ± 27.4 ms per loop (mean ± std. dev. of 2 runs, 3 loops each)

That's at least an 11-fold improvement.

Sub-Sampling

It turns out gt.distance_histogram() is capable of sub-sampling AND runs in parallel if graph-tool was compiled with OpenMP support!
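Whether the parallel path is actually available can be checked at runtime (these helpers are part of graph-tool's API; verify against your installed version):

print(gt.openmp_enabled())      # True if graph-tool was compiled with OpenMP
gt.openmp_set_num_threads(4)    # optionally cap the thread count (4 is arbitrary)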

import math
import numpy as np

def subsample_APSP(G, weights=G_gt_weights):
    """Estimate the mean path length and its standard error"""

    def sample_mean(samples, G=G, weights=weights):
        """Calculate the mean from a histogram over `samples` source vertices"""
        counts, bins = gt.distance_histogram(G, weight=weights, samples=samples)
        # Approximate each distance by its bin midpoint
        return sum(counts * (.5 + bins[:-1])) / sum(counts)

    # Split about half of the vertices into N batches of M samples each
    N_samples = int(round(G.num_vertices() / 2, 0))
    N = int(round(math.sqrt(N_samples), 0))
    M = int(round(N_samples / N, 0))

    out = [sample_mean(M) for _ in range(N)]

    # Mean of the batch means and its standard error
    return sum(out) / len(out), np.std(out) / math.sqrt(N)

print("{:.2f} +- {:.2f}".format(*subsample_APSP(G_gt)))
582.55 +- 2.83

Note: There is a small deviation due to a difference within the first bin. If someone has a clue why, I'd be glad to know! Edit: This seems to be a bug in graph-tool.

%timeit subsample_APSP(G)
222 ms ± 58.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

That's another factor of roughly 21 over plain graph-tool and a total speed-up of about 241!


Appendix

Figure: sub-sampling investigation. X axis: the overall number of involved vertices as a fraction of the total number of vertices.

answered Oct 21 '22 by Suuuehgi