I am learning algorithm analysis. I am having trouble understanding the difference between O, Ω, and Θ.
The way they're defined is as follows:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus there exists some constant c such that f(n) is always ≤ c · g(n), for large enough n (i.e., n ≥ n₀ for some constant n₀).

f(n) = Ω(g(n)) means c · g(n) is a lower bound on f(n). Thus there exists some constant c such that f(n) is always ≥ c · g(n), for all n ≥ n₀.

f(n) = Θ(g(n)) means c₁ · g(n) is an upper bound on f(n) and c₂ · g(n) is a lower bound on f(n), for all n ≥ n₀. Thus there exist constants c₁ and c₂ such that f(n) ≤ c₁ · g(n) and f(n) ≥ c₂ · g(n). This means that g(n) provides a nice, tight bound on f(n).
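To make these definitions concrete, here is a minimal Python sketch (my own illustration, not part of the definitions above) that numerically spot-checks the inequalities for f(n) = 3n² + 5 with hand-picked constants. A finite check like this is only a sanity test, not a proof.

    # Spot-check the O/Ω definitions for f(n) = 3n^2 + 5 against g(n) = n^2.
    # The constants c and n0 are hand-picked for this illustration.

    def f(n):
        return 3 * n**2 + 5

    def g(n):
        return n**2

    # f(n) = O(g(n)): there exist c, n0 with f(n) <= c*g(n) for all n >= n0.
    # Try c = 4, n0 = 3 (3n^2 + 5 <= 4n^2 exactly when n^2 >= 5).
    assert all(f(n) <= 4 * g(n) for n in range(3, 10000))

    # f(n) = Ω(g(n)): there exist c, n0 with f(n) >= c*g(n) for all n >= n0.
    # Try c = 3, n0 = 1 (3n^2 + 5 >= 3n^2 holds for every n).
    assert all(f(n) >= 3 * g(n) for n in range(1, 10000))

    # Both hold, so f(n) = Θ(g(n)) with c1 = 4 and c2 = 3.
    print("f(n) = Θ(n²) spot-check passed")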
The way I have understood this is:
O(f(n)) gives the worst-case complexity of the given function/algorithm.
Ω(f(n)) gives the best-case complexity of the given function/algorithm.
Θ(f(n)) gives the average-case complexity of the given function/algorithm.
Please correct me if I am wrong. If that were the case, the time complexity of every algorithm would have to be expressed in all three notations. But I observed that it's expressed as either O, Ω, or Θ; why not all three?
Big O (O) – Upper Bound: it is defined as an upper bound, and an upper bound on an algorithm is the most time it can require (its worst-case performance). Big Omega (Ω) – Lower Bound. Big Theta (Θ) – Tight Bound.
Big-O is an upper bound. Big-Theta is a tight bound, i.e. upper and lower bound. When people only worry about what's the worst that can happen, big-O is sufficient; i.e. it says that "it can't get much worse than this". The tighter the bound the better, of course, but a tight bound isn't always easy to compute.
Theta Notation (Θ-notation): Theta notation encloses the function from above and below, bounding it within constant factors; that is, it gives both an upper and a lower bound on the growth of the running time. It is sometimes described as "the notation for average-case analysis", but as the next answer explains, the notation itself is independent of which case is being analyzed.
It is important to remember that the notation, whether O, Ω or Θ, expresses the asymptotic growth of a function; it does not have anything intrinsically to do with algorithms per se. The function in question may be the "complexity" (running time) of an algorithm, either worst-case, best-case or average-case, but the notation is independent of where the function comes from.
For example, the function f(n) = 3n² + 5 is:
O(n²); it is also O(n³), O(n⁴), and so on (any upper bound on its order of growth will do).
Ω(n²); it is also Ω(n log n), Ω(n), and Ω(1) (any lower bound will do).
Θ(n²), since it is both O(n²) and Ω(n²).
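One quick way to check such memberships (my own sketch; it assumes the sympy library is available) is the limit of the ratio f(n)/g(n): a finite nonzero limit means f = Θ(g), a limit of 0 means f = O(g) but not Ω(g), and a limit of ∞ means f = Ω(g) but not O(g). (The test is sufficient but not necessary, since the limit may fail to exist.)

    import sympy as sp

    n = sp.symbols('n', positive=True)
    f = 3 * n**2 + 5

    print(sp.limit(f / n**2, n, sp.oo))  # 3  -> finite, nonzero: f = Θ(n²)
    print(sp.limit(f / n**3, n, sp.oo))  # 0  -> f = O(n³) but not Ω(n³)
    print(sp.limit(f / n, n, sp.oo))     # oo -> f = Ω(n) but not O(n)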
Now, usually the function considered is the worst-case complexity of an algorithm, and which notation of the three is used depends on what we want to say about it and on how carefully we do the analysis. For example, we may observe that because there are two nested loops, the worst-case running time is at most O(n²), without caring about whether this bound is actually achieved for some input. (Usually it is obvious that it is.) Or, we may say that the worst-case running time of comparison-based sorting is Ω(n log n), because there must be some inputs for which it takes at least c · n log n steps, for some constant c. Or, we may look at a particular mergesort algorithm, and see that it takes at most O(n log n) steps in the worst case and that some input makes it take n log n steps, so the worst-case running time is Θ(n log n).
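As a concrete version of the nested-loops observation (a sketch I added; the step counting is deliberately simplified), here is a doubly nested loop that performs exactly n(n−1)/2 comparisons on every input, so the quick "at most O(n²)" bound happens to be tight, i.e. the running time is in fact Θ(n²):

    def count_pair_comparisons(items):
        """Compare every unordered pair once: the classic two nested loops."""
        comparisons = 0
        n = len(items)
        for i in range(n):
            for j in range(i + 1, n):
                comparisons += 1  # one "step" per pair
        return comparisons

    # Exactly n(n-1)/2 steps regardless of the input's contents, i.e. Θ(n²):
    for n in (10, 100, 1000):
        assert count_pair_comparisons(list(range(n))) == n * (n - 1) // 2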
Note that in all three examples above, it was still the same (worst-case) running time that was being analyzed. We may analyze the best-case or average-case instead, but again, which notation of the three we use depends on what we want to say: whether we want to give an upper bound, lower bound, or tight bound on the order of growth of the same function.
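To see how one algorithm gives rise to several different functions, each of which can then be bounded with any of the three notations, consider linear search (my own example, not from the answer above): its best-case running time is Θ(1) and its worst-case running time is Θ(n).

    def linear_search(items, target):
        """Return the index of target in items, or -1 if it is absent."""
        for i, x in enumerate(items):
            if x == target:
                return i   # found
        return -1

    data = list(range(1000))
    linear_search(data, 0)    # best case: 1 comparison   -> Θ(1)
    linear_search(data, -5)   # worst case: n comparisons -> Θ(n)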