 

Minimizing memory usage of a breadth first search

In the following code, I am traversing a graph with a breadth-first search. The code constructs the graph as it traverses. This is a very large graph with a fan-out of 12, so every time the depth of the breadth-first search increases, I want to destroy the layer above it to minimize memory usage. How could I design an algorithm to do so?

#include <QQueue>
#include <string>

string Node::bfs(Node *rootNode) {
    QQueue<Node *> q;   // queue of Node*, not Cube*
    q.enqueue(rootNode);

    while (!q.isEmpty()) {
        Node *currNode = q.dequeue();
        currNode->constructChildren();
        foreach (Node *child, currNode->getListOfChildren()) {
            q.enqueue(child);
        }
        if (currNode->isGoalNode()) {
            return currNode->path;
        }
    }
    return string();    // no goal node found
}
asked Oct 13 '22 by dfetter88

1 Answer

With constant fanout and assuming a tree-like graph, the number of nodes a BFS has already visited is of the same order as the number of nodes on the fringe (e.g. in a tree with fanout K, level n has K^n nodes, and the total number of nodes at depths less than n is also Theta(K^n)).

Hence, storing the fringe alone already takes up a lot of memory. If memory is a serious constraint, an "equivalent" technique such as iterative deepening DFS may be a much better fit.

But if you do want to destroy the "visited" nodes, you need some way of tracking what has been visited (for a general graph, that is; if it is a tree there is no problem), which requires more information about the graph. For the tree-like case, a layer-by-layer variant that frees each level once it has been expanded is sketched below.
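A minimal sketch of that layer-by-layer BFS, reusing Node, constructChildren(), getListOfChildren(), isGoalNode() and path from the question. It assumes a tree-like graph (no node is reached twice), that ~Node() does not delete the node's children, and that path is stored by value in each child so deleting a parent does not invalidate it:

#include <QList>
#include <string>

std::string bfsDiscardingLayers(Node *rootNode) {
    QList<Node *> currentLevel;
    currentLevel.append(rootNode);

    while (!currentLevel.isEmpty()) {
        QList<Node *> nextLevel;

        // Expand every node on the current level before descending.
        foreach (Node *node, currentLevel) {
            if (node->isGoalNode()) {
                return node->path;      // cleanup of remaining nodes omitted
            }
            node->constructChildren();
            foreach (Node *child, node->getListOfChildren()) {
                nextLevel.append(child);
            }
        }

        // The whole layer has been expanded, so it can be destroyed;
        // only the fringe (nextLevel) stays in memory.
        foreach (Node *node, currentLevel) {
            delete node;
        }
        currentLevel = nextLevel;
    }
    return std::string();               // no goal node found
}

Note that this only reduces memory by a constant factor: as argued above, the fringe itself is still Theta(K^n) nodes.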

EDIT: why iterative deepening DFS is better.

The fringe (unvisited nodes adjacent to visited nodes) of a BFS is O(K^n) in size, where n is the current depth. The fringe of a DFS is only O(n) in size.

Iterative deepening DFS has the same fringe size as DFS, and gives the same result as BFS, because it "simulates" BFS.
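A minimal sketch of iterative deepening DFS over the same Node API as the question (constructChildren(), getListOfChildren(), isGoalNode(), path). It assumes a tree-like graph, that ~Node() does not delete the node's children, and that constructChildren() can safely be called again after a previous set of children has been deleted; maxDepth is an illustrative cut-off, not part of the original code:

#include <QList>
#include <string>

// Depth-limited DFS: returns true and fills `result` if a goal node is
// found within `limit` levels below `node`.
bool depthLimitedSearch(Node *node, int limit, std::string &result) {
    if (node->isGoalNode()) {
        result = node->path;
        return true;
    }
    if (limit == 0)
        return false;

    node->constructChildren();
    bool found = false;
    foreach (Node *child, node->getListOfChildren()) {
        if (!found)
            found = depthLimitedSearch(child, limit - 1, result);
        // Free each child before backtracking, so only the nodes on the
        // current path (O(depth) of them) are held in memory at any time.
        delete child;
    }
    return found;
}

// Iterative deepening: run depth-limited DFS with increasing limits.
// Shallow goals are found first, so the result matches what BFS would
// return, at the cost of regenerating the tree from the root on each pass.
std::string iterativeDeepeningSearch(Node *rootNode, int maxDepth) {
    std::string result;
    for (int limit = 0; limit <= maxDepth; ++limit) {
        if (depthLimitedSearch(rootNode, limit, result))
            return result;
    }
    return std::string();   // no goal within maxDepth levels
}

The trade-off is repeated work: each deepening pass reconstructs the upper levels again, which is usually acceptable because with fanout 12 the deepest level dominates the total cost anyway.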

answered Nov 09 '22 by lijie