In big-O notation, is O((log n)^k) = O(log n), where k is some constant (e.g. the number of logarithmic for loops), true?
My professor told me that this statement is true, but said it would be proved later in the course. I was wondering whether any of you could demonstrate its validity, or share a link where I could confirm it.
Logarithmic time complexity, written O(log n): when an algorithm has O(log n) running time, the number of operations grows very slowly as the input size grows. Example: binary search.
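For concreteness, here is a minimal binary-search sketch (the function name and sample data are just for illustration):

```python
def binary_search(items, target):
    # Return the index of target in the sorted list items, or -1.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid        # found
        elif items[mid] < target:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1                 # not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3
```

Each iteration halves the remaining range, so a sorted list of n items takes at most about log2(n) + 1 comparisons.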
Formal definition: f(n) = O(g(n)) means there are positive constants c and k such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n. This is also known as an asymptotic upper bound.
Here k is a constant. Suppose, for example, that an algorithm performs 1000·n operations for an input of size n; it is then O(n).
Clearly log(n) is smaller than n for large inputs, hence an algorithm of complexity O(log n) is better than one of complexity O(n).
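As a quick numeric illustration of how slowly log n grows relative to n (a throwaway sketch; the sample sizes are arbitrary):

```python
import math

# Compare log2(n) against n for a few input sizes (values chosen
# arbitrarily for illustration). Even at n = 10^9, log2(n) is under 30.
for n in [10, 1_000, 1_000_000, 1_000_000_000]:
    print(f"n = {n:>13,}   log2(n) = {math.log2(n):6.2f}")
```

For a billion items, an O(log n) algorithm does on the order of 30 steps, while an O(n) algorithm does a billion.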
(1) It is true that O(log(n^k)) = O(log n).
(2) It is false that O(log^k(n)) (also written O((log n)^k)) = O(log n).
Observation: (1) has been proven by nmjohn.
Exercise: prove (2). (Hint: f(n) = log^2 n is O(log^2 n). Is it O(log n)? What is a sufficiently large constant c such that, for all n greater than n0, c log n > log^2 n?)
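Spoiler, in case you want the arguments written out (a sketch rather than a formal proof):

```latex
% (1): log(n^k) = k log n, and big-O absorbs the constant factor k:
%      taking c = k and n0 = 2 gives log(n^k) <= c log n for all n >= n0.
\[
  \log(n^k) = k \log n \implies O(\log(n^k)) = O(\log n).
\]
% (2): no constant c can satisfy c log n >= log^2 n for all large n,
%      because the ratio of the two functions is itself unbounded:
\[
  \frac{\log^2 n}{\log n} = \log n \to \infty \quad (n \to \infty),
\]
% hence log^2 n is not O(log n), and more generally
% O(log^k n) differs from O(log n) for every constant k > 1.
```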
EDIT:
On a related note, anybody who finds this question helpful and/or interesting should go show some love for the new "Computer Science" StackExchange site and help make it a reality. Here's the link:
http://area51.stackexchange.com/proposals/35636/computer-science-non-programming?referrer=rpnXA1_2BNYzXN85c5ibxQ2