Is O(log(log(n))) actually just O(log(n)) when it comes to time complexity? Do you agree that this function g() has a time complexity of O(log(log(n)))?
    int f(int n) {
        if (n <= 1)
            return 0;
        return f(n / 2) + 1;
    }

    int g(int n) {
        int m = f(f(n));
        int i;
        int x = 0;
        for (i = 0; i < m; i++) {
            x += i * i;
        }
        return m;
    }
Big-O means "is of the same order as". The corresponding little-o means "is ultimately smaller than": f(n) = o(1) means that f(n) → 0 as n → ∞.
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement.
So, what is Big O? Big O notation is the language we use for talking about how long an algorithm takes to run (time complexity) or how much memory is used by an algorithm (space complexity). Big O notation can express the best, worst, and average-case running time of an algorithm.
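To make "stronger statement" concrete for the case in the question (this derivation is my addition, using the standard substitution t = log n):

```latex
\lim_{n\to\infty}\frac{\log\log n}{\log n}
  \;=\; \lim_{t\to\infty}\frac{\log t}{t} \;=\; 0
  \qquad (t = \log n),
```

so log(log(n)) = o(log(n)): every O(log(log(n))) algorithm is also O(log(n)), but the two complexity classes are not equal.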
The function f(n) computes the base-2 logarithm of n by repeatedly dividing by 2. It iterates log2(n) times.
Calling it on its own result, f(f(n)), will indeed return log2(log2(n)), taking an additional log2(log2(n)) iterations. So far the complexity is O(log(N)) + O(log(log(N))). The first term dominates the second, so the overall complexity is O(log(N)).
The final loop iterates log2(log2(n)) times, so the time complexity of this last phase is O(log(log(N))), negligible compared to the initial phase.
Note that since x is not used before the end of function g, computing it is not needed and the compiler may well optimize this loop away entirely.
Overall time complexity comes out as O(log(N)), which is not the same as O(log(log(N))).
It looks like it is log(n) + log(log n) + log(log n). In order: the first recursion of f(), plus the second recursion of f(), plus the for loop. The final complexity is O(log n), because lower-order terms are ignored.