I have an application that would benefit from using A*; however, for legacy reasons, I need it to continue generating exactly the same paths it did before when there are multiple best-paths to choose from.
For example, consider this maze (S = start, F = finish, X = wall, . = empty space):

...X
FX.S
....
with direction-priorities Up; Right; Down; Left. Using breadth-first, we will find the path DLLLU; however, using A* we immediately go left, and end up finding the path LULLD.
I've tried making sure to always expand in the correct direction when breaking ties, and overwriting the PreviousNode pointers when a node is reached again from a more important direction, but neither works in that example. Is there a way to do this?
A* search is optimal if the heuristic is admissible. Admissibility means that, for whichever node you expand, the current estimate never exceeds the true optimal cost, so the path about to be expanded still has a chance of leading to an optimal path.
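For the grid in the question, one admissible choice is Manhattan distance; a minimal sketch in Python (assuming unit-cost 4-connected moves; the name is illustrative):

def manhattan(cell, goal):
    # Never overestimates the remaining cost on a 4-connected grid
    # with unit step cost, so it is admissible.
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])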
You can speed up BFS from a source to a target by doing a bi-directional search. A bi-directional search is basically doing a BFS from the source and from the target at the same time, one step from each - until the two fronts meet each other.
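A sketch of that idea in Python, assuming the graph is given as a neighbors(v) function (an illustrative name); it returns only the shortest-path length, since reproducing a specific tie-broken path bidirectionally is exactly the hard part of the original question:

def bidirectional_bfs_distance(neighbors, source, target):
    # neighbors(v) -> iterable of adjacent vertices of an unweighted, undirected graph.
    # Returns the length of a shortest source-target path, or None if there is none.
    if source == target:
        return 0
    dist_s, dist_t = {source: 0}, {target: 0}
    frontier_s, frontier_t = [source], [target]
    while frontier_s and frontier_t:
        # Expand the smaller frontier by one full BFS level.
        if len(frontier_s) <= len(frontier_t):
            dist, other, frontier = dist_s, dist_t, frontier_s
        else:
            dist, other, frontier = dist_t, dist_s, frontier_t
        next_frontier = []
        for u in frontier:
            for v in neighbors(u):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    next_frontier.append(v)
        if dist is dist_s:
            frontier_s = next_frontier
        else:
            frontier_t = next_frontier
        # The moment the fronts meet, the best combination over the new level
        # is the exact shortest-path length.
        met = [v for v in next_frontier if v in other]
        if met:
            return min(dist[v] + other[v] for v in met)
    return None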
Breadth-first search will always find the shortest path in an unweighted graph. The graph may be cyclic or acyclic.
We say that BFS is the algorithm to use if we want to find the shortest path in an undirected, unweighted graph. The claim for BFS is that the first time a node is discovered during the traversal, its distance from the source gives us the shortest path to it.
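To illustrate this in the question's setting, here is a minimal BFS on the maze from the question, expanding neighbours in the fixed priority Up, Right, Down, Left and recording a parent pointer the first time a cell is discovered (a sketch; the names and grid encoding are illustrative):

from collections import deque

# Direction priority from the question: Up, Right, Down, Left.
MOVES = [('U', -1, 0), ('R', 0, 1), ('D', 1, 0), ('L', 0, -1)]

def bfs_path(grid, start, goal):
    # grid: list of strings where 'X' is a wall; start, goal: (row, col) tuples.
    # The first time a cell is discovered fixes its shortest distance and parent.
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for move, dr, dc in MOVES:  # expansion order = tie-breaking priority
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != 'X' and (nr, nc) not in parent):
                parent[(nr, nc)] = (cell, move)
                queue.append((nr, nc))
    if goal not in parent:
        return None
    moves = []
    cell = goal
    while parent[cell] is not None:
        cell, move = parent[cell]
        moves.append(move)
    return ''.join(reversed(moves))

maze = ['...X',
        'FX.S',
        '....']
print(bfs_path(maze, start=(1, 3), goal=(1, 0)))  # prints DLLLU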
If the original algorithm was BFS, you are looking for the smallest of the shortest paths where "smallest" is according to the lexicographic order induced by some total order Ord
on the edges (and of course "shortest" is according to path length).
The idea of tweaking weights suggested by amit is a natural one, but I don't think it is very practical because the weights would need to have a number of bits comparable to the length of a path to avoid discarding information, which would make the algorithm orders of magnitude slower.
Thankfully this can still be done with two simple and inexpensive modifications to A*:
first, relax the termination test so that the search visits every node that lies on some shortest path; second, add a reconstruction phase that walks the resulting shortest-path DAG and picks the Ord-smallest path from start to goal in this DAG, which is the desired solution.

Schematically, classic A* is:
path_length = infinity for every node
path_length[start] = 0

while score(goal) > minimal score of unvisited nodes:
    x := any unvisited node with minimal score
    mark x as visited
    for y in unvisited neighbors of x:
        path_length_through_x = path_length[x] + d(x,y)
        if path_length[y] > path_length_through_x:
            path_length[y] = path_length_through_x
            ancestor[y] = x

return [..., ancestor[ancestor[goal]], ancestor[goal], goal]
where score(x) stands for path_length[x] + heuristic(x, goal).
We simply turn the strict while-loop inequality into a non-strict one and add a path reconstruction phase:
path_length = infinity for every node
path_length[start] = 0

while score(goal) >= minimal score of unvisited nodes:
    x := any unvisited node with minimal score
    mark x as visited
    for y in unvisited neighbors of x:
        path_length_through_x = path_length[x] + d(x,y)
        if path_length[y] > path_length_through_x:
            path_length[y] = path_length_through_x

optimal_nodes = [goal]
for every x in optimal_nodes: // note: we dynamically add nodes in the loop
    for y in neighbors of x not in optimal_nodes:
        if path_length[x] == path_length[y] + d(x,y):
            add y to optimal_nodes

path = [start]
x = start
while x != goal:
    z = undefined
    for y in neighbors of x that are in optimal_nodes:
        if path_length[y] == path_length[x] + d(x,y):
            z = y if z is undefined or (x,y) is smaller than (x,z) according to Ord
    x = z
    append x to path

return path
Warning: to quote Knuth, I have only proven it correct, not tried it.
As for the performance impact, it should be minimal: on top of what classic A* visits, the search loop only visits the nodes whose score is exactly equal to the optimal path length, and the reconstruction phase is quasi-linear in the number of nodes that belong to a shortest path. The impact is smaller if, as you imply, there is only one shortest path in most cases. You can even optimize for this special case, e.g. by remembering an ancestor node as in the classic case, which you set to a special error value when there is more than one optimal ancestor (that is, when path_length[y] == path_length_through_x). Once the search loop is over, you attempt to retrieve a path through ancestor as in classic A*; you only need to execute the full path reconstruction if an error value was encountered when building that path.
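To make the recipe concrete, here is a minimal Python sketch of the three phases on the question's grid, using Manhattan distance as the (consistent, admissible) heuristic and the direction priority Up, Right, Down, Left as Ord. The names and grid encoding are illustrative, the single-ancestor optimization above is omitted, and the goal is assumed to be reachable:

import heapq

# Ord from the question: Up < Right < Down < Left.
MOVES = [('U', -1, 0), ('R', 0, 1), ('D', 1, 0), ('L', 0, -1)]
INF = float('inf')

def neighbors(grid, cell):
    rows, cols = len(grid), len(grid[0])
    r, c = cell
    for move, dr, dc in MOVES:  # yielded in Ord order
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != 'X':
            yield move, (nr, nc)

def modified_astar(grid, start, goal):
    def h(cell):  # Manhattan distance, admissible on a unit-cost grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Phase 1: A* with the *non-strict* stopping test, so every node on some
    # shortest path ends up visited with its exact path_length.
    path_length = {start: 0}
    visited = set()
    heap = [(h(start), start)]
    while heap and heap[0][0] <= path_length.get(goal, INF):  # score(goal) >= min score
        _, x = heapq.heappop(heap)
        if x in visited:
            continue
        visited.add(x)
        for _, y in neighbors(grid, x):
            new_len = path_length[x] + 1
            if new_len < path_length.get(y, INF):
                path_length[y] = new_len
                heapq.heappush(heap, (new_len + h(y), y))

    # Phase 2: collect every node that lies on some shortest path (the DAG).
    optimal_nodes = {goal}
    stack = [goal]
    while stack:
        x = stack.pop()
        for _, y in neighbors(grid, x):
            if y not in optimal_nodes and path_length.get(y, INF) + 1 == path_length[x]:
                optimal_nodes.add(y)
                stack.append(y)

    # Phase 3: walk the DAG from start to goal, taking the Ord-smallest edge
    # out of every node.
    path, x = '', start
    while x != goal:
        for move, y in neighbors(grid, x):
            if y in optimal_nodes and path_length[y] == path_length[x] + 1:
                path += move
                x = y
                break
    return path

maze = ['...X',
        'FX.S',
        '....']
print(modified_astar(maze, start=(1, 3), goal=(1, 0)))  # prints DLLLU, like BFS

Here Ord compares edges by the same direction priority the legacy breadth-first search uses (the neighbors generator yields moves in that order), which is what lets phase 3 pick out the breadth-first path.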
I would build the preference on the path order directly into the heuristic function.

Let's look at the breadth-first algorithm first, and define a function over every path the search can choose:

Consider a depth-first algorithm that is currently at depth n, with the decisions made so far being x_1, ..., x_n \in {U, R, D, L}. Assign U = 0, R = 1, D = 2, L = 3 and define:

g(x_1, ..., x_n) = sum_{i=1}^{n} x_i * (1/4)^i

Fix the current step's g value as g'. At every later step where the algorithm visits a node deeper than this one, the g() value is greater than g', and at every future step where one of x_1, ..., x_n is changed (to a lower-priority direction), it is greater as well; hence g always increases while the depth-first search runs.

note: if the depth-first algorithm is successful, it selects the path with the minimal g() value

note: g() < 1 because max(L,R,U,D) = 3

Adding g to A*'s heuristic function won't interfere with the shortest path length, because the minimal edge length is >= 1.

The first solution an A* modified like this would find is the one that the depth-first search would find.
For your example:

h_bread = g(DLLLU) = (23330)_4 * c
h_astar = g(LULLD) = (30332)_4 * c

where ()_4 denotes base 4 and c is the constant 4^{-5}. So for your example, h_bread < h_astar.
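A throwaway snippet to check those numbers, using the encoding U = 0, R = 1, D = 2, L = 3 defined above:

DIGIT = {'U': 0, 'R': 1, 'D': 2, 'L': 3}

def g(path):
    # g(x_1, ..., x_n) = sum_i x_i * (1/4)^i, always < 1
    return sum(DIGIT[move] * 0.25 ** (i + 1) for i, move in enumerate(path))

print(g('DLLLU'))               # 764/1024 = 0.74609375
print(g('LULLD'))               # 830/1024 = 0.810546875
print(g('DLLLU') < g('LULLD'))  # True: the breadth-first path gets the smaller value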