
Algorithms for Big O Analysis

Which algorithms have you found to have amazing (tough, strange) complexity analyses, in terms of both the resulting O notation and the uniqueness of the way they are analyzed?

VarunGupta asked Feb 23 '09 07:02

People also ask

What is Big O notation?

Big O notation is a tool used to describe the time complexity of algorithms: it expresses how the running time grows as the input grows. In other words, it characterizes the worst-case time complexity of an algorithm. In data structures, Big O notation describes the upper bound of an algorithm's runtime.
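As an illustration (mine, not part of the quoted answer), counting basic operations shows what the notation captures:

```python
# Illustrative sketch: Big O describes how work grows with input size.

def sum_list(a):
    # One pass over the input: O(n).
    total = 0
    for x in a:
        total += x
    return total

def count_pairs(a):
    # Nested passes over the input: O(n^2).
    pairs = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            pairs += 1
    return pairs

# Doubling n roughly doubles sum_list's work but quadruples count_pairs's.
print(count_pairs(range(10)))  # 45
print(count_pairs(range(20)))  # 190, about 4x as much work
```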

What is the difference between big O and little o in the analysis of algorithms?

Big-O is used as an upper bound on the growth of an algorithm's effort (this effort is described by the function f(n)); it is usually quoted as a tight upper bound, though as written it can also be a loose one. Little-o (o()) notation describes an upper bound that cannot be tight.
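For reference, the standard formal definitions behind this distinction:

```latex
% Big-O: some constant c works.
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ \text{such that}\
    f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
% Little-o: every constant c works.
f(n) = o(g(n)) \iff \forall\, c > 0,\ \exists\, n_0 \ \text{such that}\
    f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
% The only change is \exists c to \forall c: little-o must hold for every
% constant, so f grows strictly slower than g (equivalently, f(n)/g(n) \to 0).
```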

How does Big O notation identify the best algorithm to solve a problem?

Big O describes the time taken by the best algorithm to solve every case (best, average, worst) of a problem; informally, it is "the best algorithm on its worst-case input". It also determines the maximum problem size that can be solved in a given amount of time.


2 Answers

I have (quite) a few examples:

  • The union-find data structure, which supports operations in (amortized) inverse Ackermann time. It's particularly nice because the data structure is incredibly easy to code (a minimal sketch follows this list).
  • Splay trees, which are self-balancing binary search trees (that is, no extra information is stored beyond the BST itself -- no red/black bits). Amortized analysis was essentially invented to prove bounds for splay trees; they run in amortized logarithmic time but worst-case linear time. The proofs are cool.
  • Fibonacci heaps, which perform most of the priority queue operations in amortized constant time, thus improving the runtime of Dijkstra's algorithm and other problems. As with splay trees, there are slick "potential function" proofs (the common template is sketched after this list).
  • Bernard Chazelle's algorithm for computing minimum spanning trees in linear-times-inverse-Ackermann time, i.e. O(m α(m, n)). The algorithm uses soft heaps, a variant of the traditional priority queue in which some "corruption" may occur and queries may not be answered correctly.
  • While on the topic of MSTs: an optimal algorithm has been given by Pettie and Ramachandran, but we don't know the running time!
  • Lots of randomized algorithms have interesting analyses. I'll mention just one example: Delaunay triangulation can be computed in expected O(n log n) time by incrementally adding points; the analysis is apparently intricate, though I haven't seen it.
  • Algorithms that use "bit tricks" can be neat, e.g. sorting in O(n log log n) time (and linear space) -- that's right, it breaks the O(n log n) barrier by using more than just comparisons.
  • Cache-oblivious algorithms often have interesting analyses. For example, cache-oblivious priority queues (see page 3) use log log n levels of sizes n, n^(2/3), n^(4/9), and so on.
  • (Static) range-minimum queries on arrays are neat. The standard proof tests your limits with respect to reductions: range-minimum queries are reduced to least common ancestors in trees, which are in turn reduced to range-minimum queries on a special kind of array. The final step uses a cute trick, too (a simpler, slightly slower-to-build variant is sketched below).
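For illustration, here is the union-find structure from the first bullet, with union by rank and path compression; this Python rendering is mine, but the technique is exactly what achieves the amortized inverse Ackermann bound:

```python
# Union-find (disjoint-set) with union by rank and path compression.
# Amortized cost per operation is O(alpha(n)), the inverse Ackermann function.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression (halving): point nodes past their grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2))  # True
print(uf.find(0) == uf.find(4))  # False
```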
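The "potential function" proofs mentioned for splay trees and Fibonacci heaps follow the same standard amortized-analysis template; this is the textbook formulation, not the full proofs:

```latex
% Amortized cost of operation i, with potential \Phi on data structure states D_i:
\hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1})
% Summing telescopes, so total actual cost is bounded by total amortized cost
% whenever \Phi(D_n) \ge \Phi(D_0):
\sum_{i=1}^{n} c_i = \sum_{i=1}^{n} \hat{c}_i - \Phi(D_n) + \Phi(D_0)
               \le \sum_{i=1}^{n} \hat{c}_i
% Fibonacci heaps use \Phi(H) = t(H) + 2\,m(H) (trees plus twice the marked
% nodes); splay trees use \Phi(T) = \sum_{x \in T} \log(\mathrm{size}(x)).
```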
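The last bullet's O(n)-preprocessing result goes through the LCA reduction; as a point of comparison, here is the simpler standard sparse-table variant (my sketch, not the reduction itself), with O(n log n) preprocessing and O(1) queries:

```python
# Static range-minimum queries via a sparse table.
# Build: O(n log n) time and space. Query: O(1).

def build_sparse_table(a):
    n = len(a)
    # table[k][i] = min of a[i : i + 2**k]
    table = [list(a)]
    k = 1
    while (1 << k) <= n:
        prev = table[k - 1]
        table.append([min(prev[i], prev[i + (1 << (k - 1))])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def range_min(table, lo, hi):
    # Minimum of a[lo:hi] (hi exclusive): cover [lo, hi) with two
    # overlapping power-of-two windows; overlap is harmless for min.
    k = (hi - lo).bit_length() - 1
    return min(table[k][lo], table[k][hi - (1 << k)])

a = [5, 2, 4, 7, 1, 3]
t = build_sparse_table(a)
print(range_min(t, 1, 4))  # min of [2, 4, 7] -> 2
print(range_min(t, 0, 6))  # 1
```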
A. Rex answered Sep 22 '22 18:09


Ackermann's function.
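(The inverse of Ackermann's function is what appears in the union-find bound above; the function itself is simple to write down but grows explosively. A minimal sketch, my rendering:)

```python
import sys
from functools import lru_cache

# The Ackermann function: total and computable, but not primitive recursive.
# Even tiny arguments produce deep recursion and enormous values.

sys.setrecursionlimit(100000)

@lru_cache(maxsize=None)
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61; A(4, 2) already has 19,729 digits
```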

dirkgently answered Sep 20 '22 18:09