I've been having some problems trying to grasp the concept of big O notation. By definition, T(n) ∈ O(G(n)) if T(n) <= G(n) * C for some constant C > 0.
Since the constant C can be any positive number, wouldn't the following example be true as well?
Example:
n log n ∈ O(log n)
n log n <= log n * C
where C is set equal to the value of n.
I know the answer is that n log n ∉ O(log n), but I don't understand why, since C can be any constant.
Thanks in advance for your help :D
Big O can get very complex, and it can be hard to conceptualize. The few things I was able to learn so far are really just the tip of the iceberg. However, once you understand why and how it works, it's easier to understand some of the more complex ideas.
With Big O notation, we use the size of the input, which we call n. So we can say things like the runtime grows "on the order of the size of the input" (O(n)) or "on the order of the square of the size of the input" (O(n^2)).
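As a concrete illustration (a minimal sketch of my own; the function names are made up), here are two loops whose step counts grow linearly and quadratically with the input size n:

```python
def linear_work(items):
    # Touches each element once: the step count grows
    # "on the order of the size of the input" -> O(n)
    total = 0
    for x in items:
        total += x
    return total

def quadratic_work(items):
    # Examines every pair of elements: the step count grows
    # "on the order of the square of the size of the input" -> O(n^2)
    pairs = 0
    for x in items:
        for y in items:
            pairs += 1
    return pairs
```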
Big-O notation is the language we use for talking about how long an algorithm takes to run (time complexity) or how much memory is used by an algorithm (space complexity). Big-O notation can express the best, worst, and average-case running time of an algorithm.
c is just that, a constant. This means that you can't say "let c be the value of n", because you must choose some c first and then allow the comparison to hold for all n.
In other words, for some T(n) to be O(G(n)), there must exist some constant c such that G(n) * c is greater than or equal to T(n) for all sufficiently large n.
Thus n log n is not O(log n) because, no matter what constant c you choose, every n > c makes n log n greater than c log n.
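You can see this concretely with a small Python check (my own sketch, not a formal proof; the function name is hypothetical): fix any constant c you like, and there is always an n beyond which n log n exceeds c log n.

```python
import math

def first_n_exceeding(c, start=2):
    """Return the first n >= start where n * log(n) > c * log(n).

    For n >= 2, log(n) > 0, so the inequality holds exactly when
    n > c; the loop therefore always terminates (at roughly c + 1).
    """
    n = start
    while n * math.log(n) <= c * math.log(n):
        n += 1
    return n

# No matter how large a constant we fix, the bound eventually fails:
for c in (10, 1000, 100000):
    n = first_n_exceeding(c)
    print(f"c = {c}: n log n > c log n for every n >= {n}")
```

Since a bigger c only pushes the crossover point further out rather than eliminating it, no single constant can make c * log n an upper bound for n log n over all n.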