Which models of algorithm running time exist?
We all expect mergesort to be faster than bubblesort, and note that mergesort makes O(n log n) comparisons versus O(n²) for bubblesort.
For other algorithms, you count operations other than comparisons and swaps, such as pointer dereferences, array lookups, or arithmetic on fixed-size integers.
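To make this concrete, here's a minimal sketch (in Python; the function names and counting approach are my own, for illustration) that instruments both sorts to count comparisons:

```python
import random

def bubble_sort_comparisons(a):
    """Bubble sort; returns the sorted list and the number of comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1                  # one comparison per inner step
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def merge_sort_comparisons(a):
    """Merge sort; returns the sorted list and the number of comparisons."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_comparisons(a[:mid])
    right, cr = merge_sort_comparisons(a[mid:])
    merged, comparisons, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comparisons += 1                      # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons

data = [random.randrange(10**6) for _ in range(1000)]
print(bubble_sort_comparisons(data)[1])   # roughly n^2/2, i.e. ~500,000
print(merge_sort_comparisons(data)[1])    # roughly n*log2(n), i.e. ~8,700
```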
What other ways to model execution time are there?
One I know of myself is counting the number of blocks read from and written to disk; see my answer to When does Big-O notation fail? for a lengthy description.
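As a rough sketch of that model, here's how you count block transfers instead of CPU operations, using the standard external-memory parameters N (input size), M (memory size), and B (block size, all in elements); the formulas below are the textbook bounds, not measurements:

```python
import math

def scan_ios(N, B):
    """A sequential scan reads ceil(N/B) blocks."""
    return math.ceil(N / B)

def external_mergesort_ios(N, M, B):
    """Roughly (N/B) * ceil(log_{M/B}(N/B)) block transfers."""
    n_blocks = math.ceil(N / B)
    fan_in = M // B                    # how many blocks fit in memory
    passes = max(1, math.ceil(math.log(n_blocks, fan_in)))
    return n_blocks * passes

# e.g. 10^9 elements, memory for 10^6 of them, blocks of 10^3:
print(scan_ios(10**9, 10**3))                       # 1,000,000 I/Os
print(external_mergesort_ios(10**9, 10**6, 10**3))  # 2,000,000 I/Os
```

Note how the CPU work (comparisons, copies) simply doesn't appear: in this model it's free.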
Another is counting the number of cache misses. This expands on the I/O model by looking at all levels of the memory hierarchy.
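For illustration, here's a toy simulation (a direct-mapped cache of my own invention, far simpler than real hardware) showing why traversal order alone changes the miss count:

```python
LINE = 8          # elements per cache line
NLINES = 64       # number of cache lines

def count_misses(addresses):
    """Simulate a direct-mapped cache; return the number of misses."""
    cache = [None] * NLINES          # one tag per cache line
    misses = 0
    for addr in addresses:
        line_id = addr // LINE
        slot = line_id % NLINES
        if cache[slot] != line_id:   # miss: fetch the line
            cache[slot] = line_id
            misses += 1
    return misses

# An n x n matrix stored row-major, traversed in two different orders:
n = 256
row_major = (i * n + j for i in range(n) for j in range(n))
col_major = (i * n + j for j in range(n) for i in range(n))
print(count_misses(row_major))   # ~n*n/LINE misses: good locality
print(count_misses(col_major))   # ~n*n misses: nearly every access misses
```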
A third, for distributed algorithms (such as in secure multiparty computation), is to count the amount of data transmitted across the network (commonly measured in rounds of communication or number of messages).
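As a sketch of that model (the protocol and topology here are my own choice, purely for illustration), here's a simulation that counts rounds and messages for summing values held by p parties via pairwise tree aggregation:

```python
def tree_sum_cost(p):
    """Each round, pairs of live parties merge: ~log2(p) rounds, p-1 messages."""
    values = list(range(p))           # dummy local inputs
    rounds = messages = 0
    while len(values) > 1:
        rounds += 1
        merged = []
        for k in range(0, len(values) - 1, 2):
            messages += 1             # right neighbour sends to left
            merged.append(values[k] + values[k + 1])
        if len(values) % 2:           # odd party out advances for free
            merged.append(values[-1])
        values = merged
    return values[0], rounds, messages

total, rounds, messages = tree_sum_cost(16)
print(total, rounds, messages)        # 120, 4 rounds, 15 messages
```

Again, local computation is free in this model; only communication is charged.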
What other interesting things are there to count (and not count!) in order to predict the performance of an algorithm?
Also, how good are these models? As far as I've heard, cache-oblivious algorithms are competitive with I/O-efficient algorithms for data on disk, but not with tuned in-memory algorithms.
In particular: in which specific instances do these models mispredict relative performance? According to my own experiments, Fibonacci heaps don't speed up Dijkstra's shortest-path algorithm (versus binary heaps) when the data is small enough to fit in memory.
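For anyone who wants to repeat that kind of experiment, here's a minimal harness using Python's heapq as the binary heap (it uses lazy deletion rather than decrease-key, and a Fibonacci heap would need a separate implementation, which I don't assume here):

```python
import heapq

def dijkstra_op_counts(graph, source):
    """graph: {u: [(v, weight), ...]}. Returns (dist, pushes, pops)."""
    dist = {source: 0}
    heap = [(0, source)]
    pushes, pops = 1, 0
    while heap:
        d, u = heapq.heappop(heap)
        pops += 1
        if d > dist.get(u, float('inf')):
            continue                       # stale entry (lazy deletion)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
                pushes += 1
    return dist, pushes, pops

g = {0: [(1, 4), (2, 1)], 2: [(1, 2)], 1: []}
print(dijkstra_op_counts(g, 0))   # ({0: 0, 1: 3, 2: 1}, 4, 4)
```

Counting pushes and pops (rather than wall-clock time) makes the comparison between heap implementations machine-independent.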
You have to define a computational model, give an estimate of the cost of each operation, and then analyse your algorithm in terms of those costs. Of course, the costs are determined by the particular environment and the characteristics of the machine where you want to deploy your algorithm, so the question is really too generic.
In an algorithms course, we just assume that each operation costs 1, so we count how many times we loop; for algorithms that work on data in external memory, we assume that each operation on data already in main memory costs 0 and each read/write to disk costs 1; and so on.
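To illustrate (all names and numbers here are mine), here's the same linear scan charged under those two cost assignments:

```python
def scan_cost(n, block_size, costs):
    """Total cost of scanning n elements under the given per-op costs."""
    compares = n                        # one comparison per element
    block_reads = -(-n // block_size)   # ceil(n / block_size)
    return compares * costs['compare'] + block_reads * costs['block_read']

unit_cost = {'compare': 1, 'block_read': 0}   # RAM model: count operations
io_cost   = {'compare': 0, 'block_read': 1}   # I/O model: count transfers

print(scan_cost(10**6, 1024, unit_cost))      # 1000000
print(scan_cost(10**6, 1024, io_cost))        # 977
```

The algorithm is identical; only the cost assignment, i.e. the model, changes the answer.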
Do those models match reality? That depends on the reality: your environment and your machine.
Your calculation with cache misses could be correct on a Core Duo but wrong on a Cell processor, where you have to manually transfer the contents of the SPEs' local memories, for example.