I am comparing two algorithms, Prim's and Kruskal's.
I understand the basic concept of time complexity and when the two work best (sparse/dense graphs)
I found this on the Internet, but I am struggling to convert it to English.
dense graph:  Prim    = O(N^2)
              Kruskal = O(N^2 * log(N))
sparse graph: Prim    = O(N^2)
              Kruskal = O(N * log(N))
It's a bit of a long shot, but could anyone explain what is going on here?
Prim is O(N^2), where N is the number of vertices.
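To see where the O(N^2) comes from, here is a minimal adjacency-matrix sketch (illustrative code of mine, not from the original source): N rounds, each doing an O(N) scan for the cheapest fringe vertex instead of using a heap.

```python
import math

def prim_mst_weight(adj):
    """Prim's algorithm on an adjacency matrix (math.inf = no edge).
    O(N^2): N rounds, each scanning all N vertices."""
    n = len(adj)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest known edge linking each vertex to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        # pick the cheapest vertex not yet in the tree -- an O(N) scan
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        total += best[u]
        for v in range(n):  # relax edges out of u -- another O(N) scan
            if not in_tree[v] and adj[u][v] < best[v]:
                best[v] = adj[u][v]
    return total
```

With a heap and adjacency lists you'd get O(E log N) instead, which is why the matrix version is the one quoted as O(N^2).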
Kruskal is O(E log E), where E is the number of edges. The "E log E" comes from sorting the edges with a good comparison sort; after that, you can process them in essentially linear time in E (using a union-find structure to detect cycles).
In a dense graph, E ~ N^2. So Kruskal would be O( N^2 log N^2 ), which is simply O( N^2 log N ).
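To make the O(E log E) concrete, here's a minimal Kruskal sketch (illustrative; the function and helper names are my own): the sort dominates at O(E log E), and the scan afterwards is near-linear thanks to union-find.

```python
def kruskal_mst_weight(n, edges):
    """Kruskal's algorithm. edges: list of (weight, u, v) tuples.
    The sort is the O(E log E) part; the loop after it is near-linear."""
    edges = sorted(edges)          # O(E log E)
    parent = list(range(n))        # union-find with path compression

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # compress the path as we go
            x = parent[x]
        return x

    total = 0.0
    taken = 0
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:               # edge joins two components: keep it
            parent[ru] = rv
            total += w
            taken += 1
            if taken == n - 1:     # tree is complete
                break
    return total
```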
OK, here goes. O(N^2) (N squared) means that the running time of the algorithm for large N grows as the square of N - so a graph twice the size will take four times as long to compute.
The Kruskal rows are merely simplified, and assume that E = c * N^2, where c is a constant that we can assume to be significantly smaller than N as N gets large. You need to know the following laws of logarithms: log(a*b) = log(a) + log(b), and log(a^n) = n * log(a). These two, combined with the fact that log c << log N (it is much smaller and can be ignored), should let you understand the simplifications there.
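A quick numerical sanity check of those two laws and of the "log c is negligible" claim (c = 0.5 and N = 10**6 are arbitrary values I picked for illustration):

```python
import math

# The two log laws used in the simplification:
a, b, n = 3.0, 7.0, 5
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
assert math.isclose(math.log(a ** n), n * math.log(a))

# With E = c * N^2:  log E = log c + 2*log N, and for large N the
# log c term barely matters, so O(E log E) collapses to O(N^2 log N).
c, N = 0.5, 10**6
log_E = math.log(c * N * N)   # log c + 2*log N
approx = 2 * math.log(N)      # same thing with log c dropped
ratio = log_E / approx        # close to 1, so the simplification is sound
```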
Now, as for the original expressions and where they were derived from, you'd need to check the page you got them from. But I'm assuming that if you're looking at Prim's and Kruskal's algorithms, you'll be able to follow the derivation - or, if you can't, that my explaining it here wouldn't really help you in the long run...