
Prim's Algorithm Time Complexity

I was looking at the Wikipedia entry for Prim's algorithm and I noticed that its time complexity with an adjacency matrix is O(V^2) and its time complexity with a heap and adjacency list is O(E lg(V)) where E is the number of edges and V is the number of vertices in the graph.

Since Prim's algorithm is typically used on denser graphs, E can approach V^2. But when it does, the time complexity with a heap becomes O(V^2 lg(V)), which is greater than O(V^2). Intuitively, a heap should improve performance over repeatedly scanning an array, but the time complexity suggests otherwise.

How does the algorithm actually slow down with an improvement?
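To make the comparison in the question concrete, here is a quick back-of-the-envelope sketch (an illustration only, assuming E = V^2 exactly and ignoring all constant factors):

```python
import math

# Rough operation counts for a near-complete graph, where E ~ V^2.
# These are just the asymptotic terms from the question, with
# constant factors ignored.
for V in (10, 100, 1000):
    E = V * V
    array_cost = V * V             # O(V^2), array-based Prim
    heap_cost = E * math.log2(V)   # O(E lg V), heap-based Prim
    print(V, array_cost, round(heap_cost))
```

For dense graphs the heap term is always larger by a factor of lg(V), which is exactly the paradox being asked about.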

kevmo314 asked Apr 04 '09 02:04


1 Answer

Even though the heap saves you from searching through the array, it slows down the "update" part of the algorithm: array updates are O(1), while heap updates are O(log(N)).

In essence, you trade speed in one part of the algorithm for speed in another.

No matter what, you have to extract the minimum V times. However, in dense graphs you also perform a lot of key updates (~V^2 of them, roughly one per edge), and in sparse graphs you don't.
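To illustrate the trade-off, here is a sketch of both variants (not the answerer's code, just a minimal illustration; Python's `heapq` has no decrease-key operation, so the heap version uses the common lazy-deletion workaround):

```python
import heapq

def prim_array(adj_matrix):
    """O(V^2) Prim: key[] array, O(V) scan for the minimum, O(1) updates."""
    V = len(adj_matrix)
    INF = float("inf")
    key = [INF] * V
    in_mst = [False] * V
    key[0] = 0
    total = 0
    for _ in range(V):
        # O(V) search for the cheapest vertex not yet in the tree.
        u = min((v for v in range(V) if not in_mst[v]), key=lambda v: key[v])
        in_mst[u] = True
        total += key[u]
        for v in range(V):
            w = adj_matrix[u][v]
            if w and not in_mst[v] and w < key[v]:
                key[v] = w  # the update: a single O(1) assignment
    return total

def prim_heap(adj_list, V):
    """O(E lg V) Prim: each push/pop costs O(lg V); stale entries are
    skipped on pop instead of being decreased in place."""
    in_mst = [False] * V
    total = 0
    heap = [(0, 0)]  # (weight, vertex)
    while heap:
        w, u = heapq.heappop(heap)
        if in_mst[u]:
            continue  # lazy deletion of a stale entry
        in_mst[u] = True
        total += w
        for weight, v in adj_list[u]:
            if not in_mst[v]:
                heapq.heappush(heap, (weight, v))  # the update: O(lg V)
    return total
```

Both return the same MST weight; the difference is purely in where the work goes — the array version pays O(V) per extraction but O(1) per update, while the heap version pays O(lg V) for both.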

Another example off the top of my head is searching for an element in an array. If you're only doing it once, a linear search is best — but if you run lots of queries, it's better to sort the array first and use binary search for each one.
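That trade-off can be sketched with Python's standard `bisect` module:

```python
import bisect

data = [42, 7, 19, 3, 88, 56, 23, 11]

# One query: a linear scan is O(N), with no setup cost.
assert 19 in data

# Many queries: pay O(N log N) once to sort, then O(log N) per lookup.
sorted_data = sorted(data)

def contains(x):
    i = bisect.bisect_left(sorted_data, x)
    return i < len(sorted_data) and sorted_data[i] == x

assert contains(19) and not contains(20)
```

As with Prim's algorithm, which strategy wins depends on how many operations of each kind you expect to perform.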

v3. answered Sep 17 '22 17:09