Is there an optimisation that lowers the constant factor of the runtime of Floyd-Warshall, if you are guaranteed to have a symmetric adjacency matrix?
After some thought I came up with:
for (int k = 0; k < N; ++k)
    for (int i = 0; i < N; ++i)
        for (int j = 0; j <= i; ++j)
            dist[j][i] = dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]);
Of course, we need to show both that it's correct and that it's faster.
Correctness is harder to prove, since it relies on the proof of Floyd-Warshall's algorithm, which is non-trivial. A pretty good proof is given here: Floyd-Warshall proof
The input matrix is symmetric. The rest of the proof uses a modified version of the Floyd-Warshall proof to show that the order of the calculations in the two inner loops doesn't matter, and that the graph stays symmetric after each step. If both of these conditions hold, then both algorithms do the same thing.
Let's define dist[i][j][k] as the distance from i to j using only vertices from the set {0, ..., k} as intermediate vertices on the path from i to j. dist[i][j][-1] is defined as the weight of the edge from i to j. If there is no edge between them, this weight is taken to be infinity.
Now using the same logic as used in the proof linked above:
dist[i][j][k] = min(dist[i][j][k-1], dist[i][k][k-1] + dist[k][j][k-1])
Now, in the calculation of dist[i][k][k] (and similarly for dist[k][i][k]):
dist[i][k][k] = min(dist[i][k][k-1], dist[i][k][k-1] + dist[k][k][k-1])
Since dist[k][k][k-1] cannot be negative (or we'd have a negative cycle in the graph), this means that dist[i][k][k] = dist[i][k][k-1]: if dist[k][k][k-1] = 0, the two arguments of the min() are equal, and otherwise the first argument is smaller.
So now, because dist[i][k][k] = dist[i][k][k-1], when calculating dist[i][j][k] it doesn't matter whether dist[i][k] or dist[k][j] already allow k in their paths. Since dist[i][j][k-1] is only needed for the calculation of dist[i][j][k], the entry dist[i][j] in the matrix will stay dist[i][j][k-1] until dist[i][j][k] is calculated. If i or j equals k, the case above applies.
Therefore, the order of the calculations doesn't matter.
Now we need to show that dist[i][j] = dist[j][i] after all steps of the algorithm.
We start out with a symmetric grid, so dist[a][b] = dist[b][a] for all a and b. Assuming symmetry still holds just before an update, we have:
dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
           = min(dist[j][i], dist[k][i] + dist[j][k])
           = min(dist[j][i], dist[j][k] + dist[k][i])
           = dist[j][i]
Therefore the double assignment is valid, and it maintains the invariant that dist[a][b] = dist[b][a]. Hence dist[i][j] = dist[j][i] after all steps of the algorithm.
Therefore both algorithms yield the same, correct result.
Speed is easier to argue. The inner loop body runs just over half as many times as it normally would, so the function is about twice as fast. It is made slightly slower by the fact that you still perform the same number of assignments, but this doesn't matter much, as evaluating the min() is what takes up most of the time.
If you see anything wrong with my proof, however technical, feel free to point it out and I will attempt to fix it.
EDIT:
You can both speed it up and save half the memory by changing the loops as follows:
for (int k = 0; k < N; ++k) {
    // only entries dist[i][j] with j <= i are read or written,
    // i.e. d(a, b) is stored at dist[max(a,b)][min(a,b)]
    for (int i = 0; i < k; ++i)
        for (int j = 0; j <= i; ++j)
            dist[i][j] = min(dist[i][j], dist[k][i] + dist[k][j]);
    for (int i = k; i < N; ++i) {
        for (int j = 0; j < k; ++j)
            dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]);
        for (int j = k; j <= i; ++j)
            dist[i][j] = min(dist[i][j], dist[i][k] + dist[j][k]);
    }
}
This just splits up the for loops of the optimised algorithm above, so it's still correct; it will likely run at the same speed, but it uses half the memory.
Thanks to Chris Elion for the idea.
(Using the notation in the pseudo-code in the Wikipedia article) I believe (but haven't tested) that if the edgeCost matrix is symmetric, then the path matrix will also be symmetric after each iteration. Thus you only need to update half of the entries at each iteration.
At a lower level, you only need to store half of the matrix (since d(i,j) = d(j,i)), so you can reduce the amount of memory used, and hopefully reduce the number of cache misses since you'll access the same data multiple times.