I'm trying to understand precisely the notion of a control dependence graph. Suppose I have the following control-flow graph (in DOT notation):
digraph g {
1 -> 2;
2 -> 3;
3 -> 2;
2 -> 4;
1 -> 4;
}
It has a unique entry node (1) and a unique exit node (4), and a loop 2 -> 3 -> 2.
My question is: does the control dependence graph for this CFG contain a loop edge from 2 to itself?
Allen & Kennedy's "Optimizing Compilers for Modern Architectures" has an algorithm that produces such a loop edge. However, the control-dependence algorithm in Muchnick's "Advanced Compiler Design and Implementation" does not produce such an edge. Moreover, I couldn't find any example in the literature where a CDG is drawn with such a loop edge. I tend to believe there is no such edge, but according to the formal definition of control dependence, and according to Allen & Kennedy's algorithm, there should be!
If you can point me to an example where there is such a loop in a CDG (preferably in a peer-reviewed paper, some professor's lecture notes, etc.), or argue why Allen & Kennedy's algorithm is incorrect, I'd be glad to know.
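You can also check this directly from the definition. The sketch below (a hypothetical brute-force illustration, not any textbook's algorithm) computes postdominator sets for the CFG above by fixed-point iteration and then applies the standard definition: B is control dependent on A iff B postdominates some successor of A but does not strictly postdominate A.

```python
# Brute-force control dependence for the CFG in the question.
# Hypothetical sketch: postdominator sets via fixed-point iteration,
# then the standard definition of control dependence.

succ = {1: [2, 4], 2: [3, 4], 3: [2], 4: []}
nodes = list(succ)
exit_node = 4

# pdom(exit) = {exit}; pdom(n) = {n} | intersection of pdom(s) over successors s.
pdom = {n: set(nodes) for n in nodes}
pdom[exit_node] = {exit_node}
changed = True
while changed:
    changed = False
    for n in nodes:
        if n == exit_node:
            continue
        new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
        if new != pdom[n]:
            pdom[n] = new
            changed = True

# B is control dependent on A iff B postdominates some successor of A
# but B does not strictly postdominate A (note B = A is allowed here).
cdg = set()
for a in nodes:
    for b in nodes:
        strictly_postdominates_a = b in pdom[a] - {a}
        if not strictly_postdominates_a and any(b in pdom[c] for c in succ[a]):
            cdg.add((a, b))

print(sorted(cdg))  # → [(1, 2), (2, 2), (2, 3)]
```

Taking the definition at face value, the edge (2, 2) does appear: node 2 postdominates its successor 3 but does not strictly postdominate itself.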
The utility of such a dependence graph is to determine how to order the operations, right? In that sense, it is not helpful to know that an element depends on itself. You can draw the loops if you like, but what really matters is all the other edges.
According to Ferrante et al. (1987), control-dependence loops can exist. In Case 2 on page 325, the authors specify that
All nodes in the post-dominator tree on the path from A to B, including A and B, should be made control dependent on A. (This case captures loop dependence.)
Thus there would be a loop edge at node A in this case.
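For your CFG, this plays out on the edge 2 -> 3: the least common ancestor of 2 and 3 in the postdominator tree is 2 itself, triggering Case 2 and producing the self-loop. The sketch below is an assumed reconstruction of that edge-marking step (not the paper's original code), hard-coding the postdominator tree for this CFG, where ipdom(1) = 4, ipdom(2) = 4, ipdom(3) = 2.

```python
# Edge-marking step in the spirit of Ferrante et al.'s construction,
# reconstructed as a sketch for the CFG in the question.
ipdom = {1: 4, 2: 4, 3: 2}  # exit node 4 is the root of the postdominator tree

def path_to_root(n):
    """Nodes from n up to the root of the postdominator tree, inclusive."""
    path = [n]
    while path[-1] in ipdom:
        path.append(ipdom[path[-1]])
    return path

cdg = set()
for a, b in [(1, 2), (1, 4), (2, 3), (2, 4), (3, 2)]:
    # Only CFG edges (A, B) where B does NOT postdominate A are considered.
    if b in path_to_root(a)[1:]:
        continue
    # L = least common ancestor of A and B in the postdominator tree.
    a_ancestors = path_to_root(a)
    l = next(n for n in path_to_root(b) if n in a_ancestors)
    # Mark nodes from B up toward L; include L itself only when L == A (Case 2).
    for n in path_to_root(b):
        if n == l:
            if l == a:        # Case 2: loop dependence, A depends on itself
                cdg.add((a, n))
            break
        cdg.add((a, n))

print(sorted(cdg))  # → [(1, 2), (2, 2), (2, 3)]
```

Only the edge 2 -> 3 hits Case 2 here, and it marks both 3 and 2 as control dependent on 2, which is exactly the self-loop the question asks about.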