Possible Duplicate:
Big Theta Notation - what exactly does big Theta represent?
I understand it in theory, I guess, but what I'm having trouble grasping is the application of the three.
In school, we always used Big O to denote the complexity of an algorithm. Bubble sort was O(n^2) for example.
Now, after reading some more theory, I get that Big O is not the only measure; there are at least two other interesting ones.
But here's my question:
Big O is the upper bound, Big Omega is the lower bound, and Big Theta is a mix of the two. But what does that mean conceptually? I understand what it means on a graph; I've seen a million examples of that. But what does it mean for algorithm complexity? How does an "upper bound" or a "lower bound" mix with that?
I guess I just don't get its application. I understand that f(x) is considered O(g(x)) if, after some value n_0, f(x) is no greater than g(x) multiplied by some constant c. But what does that mean practically? Why would we be multiplying g(x) by some constant c? Hell, I thought with Big O notation constant multiples didn't matter.
In very simple language, the difference is this: Big O notation is used for the worst-case analysis of an algorithm. Big Omega is used for the best-case analysis of an algorithm. Big Theta is used when the best-case and worst-case analyses of an algorithm are the same.
In each definition c is an arbitrary constant, a value that we choose so that the inequality holds; the only thing that changes between the definitions is the direction of the inequality. When the same function works in both directions (with suitable constants), the Big O and Big Omega bounds coincide and you get Big Theta.
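To make the best-case/worst-case distinction concrete, here is a small Python sketch of my own (not part of the original answer) that counts the comparisons made by bubble sort with an early-exit flag. On an already-sorted input (best case) it does about n comparisons, on a reverse-sorted input (worst case) about n^2/2, which is why the best case is described as Omega(n) and the worst case as O(n^2):

    def bubble_sort_comparisons(items):
        """Bubble sort with an early-exit flag; returns the number of comparisons made."""
        a = list(items)
        n = len(a)
        comparisons = 0
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                comparisons += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:   # no swaps in a full pass: the list is already sorted
                break
        return comparisons

    n = 1000
    print(bubble_sort_comparisons(range(n)))         # best case (already sorted): 999 comparisons
    print(bubble_sort_comparisons(range(n, 0, -1)))  # worst case (reversed): 499500 comparisons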
Big O notation provides an upper bound on a function, whereas Big Theta provides a tight bound.
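As a rough numeric illustration (the function and the constants here are ones I picked for the example): f(n) = 2n + 10 is O(n) and also O(n^2), since both are valid upper bounds from some point on, but only n is a tight (Theta) bound, because f(n) also stays above a constant multiple of n:

    f = lambda n: 2 * n + 10

    assert all(f(n) <= 3 * n for n in range(10, 100000))  # O(n):   c = 3, n0 = 10
    assert all(f(n) <= n * n for n in range(5, 100000))   # O(n^2): a much looser upper bound
    assert all(f(n) >= 2 * n for n in range(1, 100000))   # Omega(n), so n is tight: Theta(n)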
The big O notation, and its relatives, the big Theta, the big Omega, the small o and the small omega, are ways of saying something about how a function behaves at a limit point (for example, when approaching infinity, but also when approaching 0, etc.) without saying much else about the function. They are commonly used to describe the running time and space of algorithms, but they also appear in other areas of mathematics that deal with asymptotic behavior.
The semi-intuitive definition is as follows:
A function g(x) is said to be O(f(x)) if "from some point on", g(x) is no greater than c*f(x), where c is some constant.
The other definitions are similar: Theta demands that g(x) be between two constant multiples of f(x), Omega demands that g(x) > c*f(x), and the small versions demand that the corresponding inequality holds for every constant c, not just for some constant.
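A tiny numeric check of that definition, with constants I chose for the example: g(n) = 5n^2 + 3n is O(n^2) because from n0 = 1 on it stays below 8*f(n) with f(n) = n^2, and it is Omega(n^2) because it stays above 5*f(n), so together it is Theta(n^2):

    # g(n) = 5n^2 + 3n is Theta(n^2): "from some point on" (here n0 = 1)
    # it sits between two constant multiples of n^2.
    g = lambda n: 5 * n * n + 3 * n
    f = lambda n: n * n

    c_lower, c_upper, n0 = 5, 8, 1
    for n in range(n0, 10000):
        assert c_lower * f(n) <= g(n) <= c_upper * f(n)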
But why is it interesting to say, for example, that an algorithm has run time of O(n^2)?
It's interesting mainly because, in theoretical computer science, we are most interested in how algorithms behave for large inputs. This is true because on small inputs algorithm run times can vary greatly depending on implementation, compilation, hardware, and other such things that are not really interesting when analyzing an algorithm theoretically.
The rate of growth, however, usually depends on the nature of the algorithm, and to improve it you need deeper insight into the problem you're trying to solve. This is the case, for example, with sorting algorithms, where you can get a simple algorithm (Bubble Sort) to run in O(n^2), but to improve this to O(n log n) you need a truly new idea, such as the ones behind Merge Sort or Heap Sort.
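If you want to see that difference in growth rather than take it on faith, here is a quick Python sketch (mine, not from the original answer) that counts the comparisons performed by bubble sort and by merge sort on random inputs of increasing size. Doubling n roughly quadruples the bubble sort count but only slightly more than doubles the merge sort count:

    import random

    def bubble_ops(items):
        """Plain bubble sort; returns the number of comparisons it performs."""
        a, ops = list(items), 0
        for i in range(len(a) - 1):
            for j in range(len(a) - 1 - i):
                ops += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return ops

    def merge_ops(items):
        """Merge sort; returns (sorted list, number of comparisons it performs)."""
        if len(items) <= 1:
            return list(items), 0
        mid = len(items) // 2
        left, lops = merge_ops(items[:mid])
        right, rops = merge_ops(items[mid:])
        merged, ops, i, j = [], lops + rops, 0, 0
        while i < len(left) and j < len(right):
            ops += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged, ops

    for n in (1000, 2000, 4000):
        data = [random.random() for _ in range(n)]
        print(n, bubble_ops(data), merge_ops(data)[1])  # ~n^2/2 vs ~n*log2(n) comparisons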
On the other hand, if you have an algorithm that runs in exactly 5n seconds, and another that runs in 1000n seconds (which is the difference between a long yawn and a lunch break for n=3, for example), when you get to n=1000000000000 the difference in scale seems less important: both take impossibly long. If you have an algorithm that takes O(log n), though, you'd have to wait log(1000000000000) = 12 seconds, perhaps multiplied by some constant, instead of the roughly 158,549 years that even the 5n-second algorithm would need. No matter how big the constant is, that is a completely different scale.
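The arithmetic behind those figures, in case you want to play with it (the base-10 log matches the 12-second figure above, and the year count is the 5n-second algorithm at n = 10^12):

    import math

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    n = 10**12

    linear_seconds = 5 * n          # the "5n seconds" algorithm
    log_seconds = math.log10(n)     # the O(log n) algorithm, ignoring its constant factor

    print(f"5n:      {linear_seconds / SECONDS_PER_YEAR:,.0f} years")  # ~158,549 years
    print(f"log(n):  {log_seconds:.0f} seconds")                       # 12 seconds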
I hope this makes things a little clearer. Good luck with your studies!