What kinds of graph problems are faster (in big-O terms) to solve using an incidence matrix data structure instead of the more widespread adjacency matrix?
The space complexities of the structures are:
Adjacency: O(V^2)
Incidence: O(V*E)
The consequence is that an incidence matrix saves space only when E < V, i.e., when the graph has more vertices than edges.
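To make the two representations concrete, here is a minimal sketch (plain nested lists, illustrative function names) building both from the same undirected edge list; the comments note where the space complexities above come from:

```python
def adjacency_matrix(n, edges):
    """V x V matrix: A[u][v] == 1 iff u and v share an edge."""
    A = [[0] * n for _ in range(n)]            # O(V^2) cells
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

def incidence_matrix(n, edges):
    """V x E matrix: M[v][e] == 1 iff vertex v is an endpoint of edge e."""
    M = [[0] * len(edges) for _ in range(n)]   # O(V*E) cells
    for e, (u, v) in enumerate(edges):
        M[u][e] = M[v][e] = 1
    return M

# Triangle graph on vertices 0, 1, 2.
edges = [(0, 1), (1, 2), (0, 2)]
A = adjacency_matrix(3, edges)
M = incidence_matrix(3, edges)
```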
You can look at the time complexity of some typical graph operations:
Find all vertices adjacent to a vertex: Adj: O(V), Inc: O(V*E)
Check whether two vertices are adjacent: Adj: O(1), Inc: O(E)
Count the degree (valence) of a vertex: Adj: O(V), Inc: O(E)
And so on. For any given algorithm, you can use building blocks like the above to calculate which representation gives you better overall time complexity.
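As a sketch of where the incidence-matrix bounds above come from, here are the three operations on a 0/1 incidence matrix `inc` (rows = vertices, columns = edges; function names are illustrative, not from any library):

```python
def neighbors(inc, v):
    """All vertices adjacent to v -- O(V*E): scan every edge, and for
    each edge incident to v, scan every vertex to find the other end."""
    result = set()
    for e in range(len(inc[0])):
        if inc[v][e]:
            for u in range(len(inc)):
                if u != v and inc[u][e]:
                    result.add(u)
    return result

def is_adjacent(inc, u, v):
    """Adjacency check -- O(E): look for an edge incident to both."""
    return any(inc[u][e] and inc[v][e] for e in range(len(inc[0])))

def degree(inc, v):
    """Degree of v -- O(E): count the edges incident to v."""
    return sum(inc[v])
```

With an adjacency matrix, `is_adjacent` is a single cell lookup and the other two are one row scan, which is where the O(1) and O(V) figures come from.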
As a final note, using a matrix of any kind is extremely space-inefficient for all but the most dense of graphs, and I recommend against using either unless you've consciously dismissed alternatives like adjacency lists.
I personally have never found a real application of the incidence matrix representation in a programming contest or research problem. I think it may be useful for proving some theorems or for some very special problems. One book gives "counting the number of spanning trees" as an example of a problem for which this representation is useful.
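That book example can be sketched via the Matrix-Tree (Kirchhoff) theorem: take an arbitrarily oriented incidence matrix M, form the Laplacian L = M·Mᵀ, and the number of spanning trees equals any cofactor of L. A minimal pure-Python version, using exact fractions for the determinant (all names here are illustrative):

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    # Oriented incidence matrix: +1 at one endpoint, -1 at the other.
    # (The orientation is arbitrary; the tree count does not depend on it.)
    M = [[0] * len(edges) for _ in range(n)]
    for e, (u, v) in enumerate(edges):
        M[u][e] = 1
        M[v][e] = -1
    # Laplacian L = M * M^T.
    L = [[sum(M[i][k] * M[j][k] for k in range(len(edges)))
          for j in range(n)] for i in range(n)]
    # Matrix-Tree theorem: the count is any cofactor of L; take the
    # determinant of L with row 0 and column 0 deleted, computed by
    # exact Gaussian elimination over the rationals.
    A = [[Fraction(x) for x in row[1:]] for row in L[1:]]
    det = Fraction(1)
    for col in range(n - 1):
        pivot = next((r for r in range(col, n - 1) if A[r][col]), None)
        if pivot is None:
            return 0                     # disconnected graph: no trees
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det
        det *= A[col][col]
        for r in range(col + 1, n - 1):
            f = A[r][col] / A[col][col]
            for c in range(col, n - 1):
                A[r][c] -= f * A[col][c]
    return int(det)

# Triangle graph: 3 spanning trees.
print(spanning_tree_count(3, [(0, 1), (1, 2), (0, 2)]))  # prints 3
```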
Another issue with this representation is that there is little point in storing it, because any given cell is easy to compute on the fly from the list of edges.
It may seem more useful for hypergraphs, but only dense ones. So in my opinion it is useful only for theoretical work.