This is my first course in data structures, and in every lecture (and TA session) we talk about O(log n). This is probably a dumb question, but I'd appreciate it if someone could explain to me exactly what it means.
Yes, constant time, i.e. O(1), is better than linear time O(n), because the former does not depend on the input size of the problem. Ordered from best to worst, that is O(1), then O(log n), then O(n), then O(n log n).
As we increase the input size n, O(1) will increasingly outperform O(log n).
(Take the log of both sides. Actually, it grows faster, since log(n!) = Θ(n log n).)
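To see that ordering concretely, here is a small Python sketch (my own, purely for illustration, not from the course) that prints how 1, log n, n, and n log n grow as n increases:

    import math

    # Compare the four growth rates for increasing input sizes.
    for n in (10, 1_000, 1_000_000):
        print(f"n={n:>9}  O(1)=1  O(log n)={math.log2(n):8.1f}  "
              f"O(n)={n:>9}  O(n log n)={n * math.log2(n):14.1f}")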
It means that the thing in question (usually running time) scales in a manner that is consistent with the logarithm of its input size.
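The classic example is binary search: each step halves the remaining search range, so doubling the input adds only one more step. A minimal sketch in Python (this example is mine, not part of the answer above):

    def binary_search(sorted_list, target):
        """Return the index of target in sorted_list, or -1 if absent."""
        lo, hi = 0, len(sorted_list) - 1
        while lo <= hi:
            mid = (lo + hi) // 2              # halve the search range each iteration
            if sorted_list[mid] == target:
                return mid
            elif sorted_list[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1                             # reached after at most ~log2(n) iterations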
Big-O notation doesn't mean an exact equation, but rather a bound. For instance, the following functions are all O(n):
    f(x) = 3x
    g(x) = 0.5x
    m(x) = x + 5
Because as you increase x, their outputs all increase linearly: if there's a 6:1 ratio between f(n) and g(n), there will also be approximately a 6:1 ratio between f(10*n) and g(10*n), and so on.
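A quick way to convince yourself of this (my own sketch, not part of the original answer) is to evaluate those functions and check that the ratios barely change as n grows:

    def f(x): return 3 * x
    def g(x): return 0.5 * x
    def m(x): return x + 5

    for n in (10, 100, 1_000, 1_000_000):
        # The f/g ratio is exactly 6; the f/m ratio approaches 3 as n grows.
        print(n, f(n) / g(n), f(n) / m(n))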
As for whether O(n) or O(log n) is better, consider: if n = 1000, then log n = 3 (for log base 10). Which would you rather have your algorithm take to run: 1000 seconds, or 3 seconds?
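The gap only widens as n grows. A small Python sketch (mine, using base-10 logs to match the example above):

    import math

    for n in (1_000, 1_000_000, 1_000_000_000):
        # An O(n) algorithm does roughly n units of work; an O(log n) one does roughly log10(n).
        print(f"n = {n:>13,}: linear ~ {n:>13,} steps, logarithmic ~ {math.log10(n):.0f} steps")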