
Has anyone actually implemented a Fibonacci-Heap efficiently?

The Boost C++ libraries include an implementation of Fibonacci heaps in boost/pending/fibonacci_heap.hpp. This file has apparently been in pending/ for years and by my projections will never be accepted. Also, there have been bugs in that implementation, which were fixed by my acquaintance and all-around cool guy Aaron Windsor. Unfortunately, most of the versions of that file that I could find online (and the one in Ubuntu's libboost-dev package) still had the bugs; I had to pull a clean version from the Subversion repository.

Since version 1.49, the Boost C++ Libraries have included a number of new heap data structures, including a Fibonacci heap.

I was able to compile dijkstra_heap_performance.cpp against a modified version of dijkstra_shortest_paths.hpp to compare Fibonacci heaps and binary heaps. (In the line typedef relaxed_heap<Vertex, IndirectCmp, IndexMap> MutableQueue, change relaxed to fibonacci.) I first forgot to compile with optimizations, in which case Fibonacci and binary heaps perform about the same, with Fibonacci heaps usually outperforming by an insignificant amount. After I compiled with very strong optimizations, binary heaps got an enormous boost. In my tests, Fibonacci heaps only outperformed binary heaps when the graph was incredibly large and dense, e.g.:

Generating graph...10000 vertices, 20000000 edges.
Running Dijkstra's with binary heap...1.46 seconds.
Running Dijkstra's with Fibonacci heap...1.31 seconds.
Speedup = 1.1145.
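
For reference, the modification mentioned above is just a one-line swap inside a local copy of dijkstra_shortest_paths.hpp; roughly (paraphrased, surrounding code omitted):

    // before: BGL's default mutable priority queue, a relaxed heap
    // typedef relaxed_heap<Vertex, IndirectCmp, IndexMap> MutableQueue;

    // after: use the Fibonacci heap from boost/pending/fibonacci_heap.hpp instead
    typedef fibonacci_heap<Vertex, IndirectCmp, IndexMap> MutableQueue;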

As far as I understand, these results reflect the fundamental difference between Fibonacci heaps and binary heaps. The only real theoretical difference between the two data structures is that Fibonacci heaps support decrease-key in (amortized) constant time. On the other hand, binary heaps get a great deal of performance from their implementation as an array; using an explicit pointer structure means Fibonacci heaps suffer a huge performance hit.
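
To make the layout difference concrete, here is a rough sketch (not taken from the benchmark code) of the two memory layouts; the Fibonacci heap node fields follow the standard textbook (CLRS) description:

    #include <vector>

    // A binary heap is a flat array: the children of element i live at indices
    // 2*i + 1 and 2*i + 2, so sift-up/sift-down walk contiguous, cache-friendly
    // memory and need no per-node bookkeeping.
    std::vector<int> binary_heap;

    // A Fibonacci heap node carries several pointers plus bookkeeping fields, so
    // every operation chases pointers scattered across the free store, which is
    // the "huge performance hit" mentioned above.
    struct FibNode {
        int key;
        int degree;        // number of children
        bool mark;         // has this node lost a child since it was last cut?
        FibNode* parent;
        FibNode* child;    // pointer to any one child
        FibNode* left;     // siblings/roots form circular doubly linked lists
        FibNode* right;
    };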

Therefore, to benefit from Fibonacci heaps in practice, you have to use them in an application where decrease_keys are incredibly frequent. In terms of Dijkstra, this means that the underlying graph is dense. Some applications could be intrinsically decrease_key-intensive; I wanted to try the Nagamochi-Ibaraki minimum-cut algorithm because apparently it generates lots of decrease_keys, but it was too much effort to get a timing comparison working.
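
If you want to experiment with a decrease_key-heavy workload yourself, here is a minimal sketch of Dijkstra's algorithm driven by boost::heap::fibonacci_heap (the Boost 1.49+ interface mentioned above, not the boost/pending one used in the benchmark). The tiny graph and all names are made up for illustration, and nonnegative edge weights are assumed:

    #include <boost/heap/fibonacci_heap.hpp>
    #include <cstdio>
    #include <limits>
    #include <utility>
    #include <vector>

    struct QueueEntry {
        int vertex;
        double dist;
    };

    // boost::heap is a max-heap by default, so order by *greater* distance
    // to keep the smallest tentative distance at the top.
    struct CompareByDist {
        bool operator()(const QueueEntry& a, const QueueEntry& b) const {
            return a.dist > b.dist;
        }
    };

    using Heap = boost::heap::fibonacci_heap<QueueEntry,
                                             boost::heap::compare<CompareByDist>>;

    int main() {
        const int n = 4;  // hypothetical tiny graph: adjacency lists of (target, weight)
        std::vector<std::vector<std::pair<int, double>>> adj(n);
        adj[0] = {{1, 7.0}, {2, 1.0}};
        adj[2] = {{1, 2.0}, {3, 5.0}};
        adj[1] = {{3, 1.0}};

        std::vector<double> dist(n, std::numeric_limits<double>::infinity());
        dist[0] = 0.0;

        Heap pq;
        std::vector<Heap::handle_type> handle;
        handle.reserve(n);
        for (int v = 0; v < n; ++v)
            handle.push_back(pq.push({v, dist[v]}));   // insert: amortized O(1)

        while (!pq.empty()) {
            QueueEntry e = pq.top();
            pq.pop();                                   // delete-min: amortized O(log n)
            for (const auto& edge : adj[e.vertex]) {
                int w = edge.first;
                double len = edge.second;
                if (e.dist + len < dist[w]) {
                    dist[w] = e.dist + len;
                    // Decrease-key step: repositioning the node after its key
                    // shrank is where a Fibonacci heap is amortized O(1) on paper.
                    // update() handles a change in either direction; Boost.Heap
                    // also offers increase()/decrease() when the direction is known.
                    pq.update(handle[w], {w, dist[w]});
                }
            }
        }

        for (int v = 0; v < n; ++v)
            std::printf("dist[%d] = %g\n", v, dist[v]);
        return 0;
    }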

Warning: I may have done something wrong. You may wish to try reproducing these results yourself.

Theoretical note: The improved performance of Fibonacci heaps on decrease_key is important for theoretical applications, such as Dijkstra's runtime. Fibonacci heaps also outperform binary heaps on insertion and merging (both amortized constant-time for Fibonacci heaps). Insertion is essentially irrelevant, because it doesn't affect Dijkstra's runtime, and it's fairly easy to modify binary heaps to also have insert in amortized constant time. Merge in constant time is fantastic, but not relevant to this application.

Personal note: A friend of mine and I once wrote a paper explaining a new priority queue, which attempted to replicate the (theoretical) running time of Fibonacci heaps without their complexity. The paper was never published, but my coauthor did implement binary heaps, Fibonacci heaps, and our own priority queue to compare the data structures. The graphs of the experimental results indicate that Fibonacci heaps slightly out-performed binary heaps in terms of total comparisons, suggesting that Fibonacci heaps would perform better in a situation where comparison cost exceeds overhead. Unfortunately, I do not have the code available, and presumably in your situation comparison is cheap, so these comments are relevant but not directly applicable.

Incidentally, I highly recommend trying to match the runtime of Fibonacci heaps with your own data structure. I found that I simply ended up reinventing Fibonacci heaps myself. Beforehand I thought that all of the complications of Fibonacci heaps were just arbitrary ideas, but afterward I realized that they were all natural and fairly forced.


Knuth did a comparison between Fibonacci heaps and binary heaps for minimum spanning trees back in 1993 for his book The Stanford GraphBase. He found Fibonacci heaps to be 30 to 60 percent slower than binary heaps at the graph sizes he was testing: 128 vertices at various densities.

The source code is in C (or rather CWEB which is a cross between C, math and TeX) in the section MILES_SPAN.


Disclaimer

I know the results are quite similar, and it "looks like running time is totally dominated by something other than the heap" (@Alpedar). But I couldn't find any evidence of that in the code. The code is open, so if you can find anything that could affect the test results, please tell me.


Maybe I did something wrong, but I wrote a test based on A. Rex's answer, comparing:

  • Fibonacci heap
  • D-ary heap (d = 4)
  • Binary heap
  • Relaxed heap

The execution times (for complete graphs only) for all heaps were very close. The tests were run on complete graphs with 1000, 2000, 3000, 4000, 5000, 6000, 7000, and 8000 vertices. For each test, 50 random graphs were generated, and the output reported is the average time for each heap.

Sorry about the output; it's not very verbose, because I needed it to build some charts from text files.


Here are the results (in seconds):

[heap result table]