Steven Skiena's The Algorithm Design Manual has this exercise in chapter 1:
Let P be a problem. The worst-case time complexity of P is O(n^2). The worst-case time complexity of P is also Ω(n log n). Let A be an algorithm that solves P. Which subset of the following statements are consistent with this information about the complexity of P?
- A has worst-case time complexity O(n^2).
- A has worst-case time complexity O(n^(3/2)).
- A has worst-case time complexity O(n).
- A has worst-case time complexity Θ(n^2).
- A has worst-case time complexity Θ(n^3).
How can an algorithm have two worst-case time complexities?
Is the author trying to say that for some value of n (say, 300) the upper bound for an algorithm solving P is of the order of O(n^2), while for another value of n (say, 3000) the same algorithm's worst case is Ω(n log n)?
Insertion sort, quicksort, and bubble sort all have a worst-case time complexity of O(n^2).
The complexity of an algorithm can be divided into two types: time complexity and space complexity.
The worst case gives us an upper bound on performance. Analyzing an algorithm's worst case guarantees that it will never perform worse than what we determine. Therefore, we know that the other cases must perform at least as well.
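To make that concrete, here is a small sketch (my own illustration, not from the book): counting comparisons in a linear search, the worst-case input needs the most steps, and every other input needs no more than that.

    def linear_search(items, target):
        # Count comparisons as a stand-in for "steps performed".
        comparisons = 0
        for index, value in enumerate(items):
            comparisons += 1
            if value == target:
                return index, comparisons
        return -1, comparisons

    data = list(range(10))
    print(linear_search(data, 0))   # best case: (0, 1) -- one comparison
    print(linear_search(data, 9))   # worst case: (9, 10) -- n comparisons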
The answer to your specific question,

> Is the author trying to say that for some value of n (say, 300) the upper bound for an algorithm solving P is of the order of O(n^2), while for another value of n (say, 3000) the same algorithm's worst case is Ω(n log n)?

is no. That is not how complexity functions work. :) We don't talk about different complexity classes for different values of n. The complexity refers to the entire algorithm, not to the algorithm at specific input sizes. An algorithm has a single time complexity function T(n), which computes how many steps are required to carry out the computation for an input size of n.
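For example (a hypothetical illustration, not from the book or the question), selection sort always performs exactly n(n-1)/2 comparisons; that single T(n) describes it at n = 300 just as much as at n = 3000.

    def selection_sort_comparisons(n):
        # Selection sort compares every remaining pair once:
        # T(n) = n*(n-1)/2, one function that covers every input size.
        return n * (n - 1) // 2

    # The algorithm does not switch complexity classes as n grows;
    # the same T(n) is simply evaluated at different points.
    print(selection_sort_comparisons(300))    # 44850
    print(selection_sort_comparisons(3000))   # 4498500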
In the problem, you are given two pieces of information:
- The worst-case complexity is O(n^2).
- The worst-case complexity is Ω(n log n).
All this means is that we can pick constants c1, c2, N1, and N2, such that, for our algorithm's function T(n), we have
T(n) ≤ c1*n^2 for all n ≥ N1
T(n) ≥ c2*n log n for all n ≥ N2
In other words, our T(n) is "asymptotically bounded below" by some constant times n log n and "asymptotically bounded above" by some constant times n^2. It can itself be anything "between" an n log n style function and an n^2 style function. It can even be n log n itself (since that is bounded above by n^2), or it can be n^2 (since that is bounded below by n log n). It can also be something in between, like n(log n)(log n).
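As a quick numeric sketch (my own example, using the hypothetical choices c1 = c2 = 1 and a handful of sample sizes, not a proof), such an in-between function does satisfy both bounds:

    import math

    def T(n):
        # A function "between" n log n and n^2: T(n) = n * (log2 n)^2.
        return n * math.log2(n) ** 2

    c1 = c2 = 1.0                        # hypothetical constants
    for n in (16, 100, 1000, 10**6):     # sample sizes, all >= a chosen N1 = N2 = 16
        lower = c2 * n * math.log2(n)    # c2 * n log n
        upper = c1 * n ** 2              # c1 * n^2
        assert lower <= T(n) <= upper
    print("n * (log n)^2 is >= n log n and <= n^2 at every sampled size")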
It's not so much that an algorithm has "multiple worst-case complexities" in the sense of having different behaviors. What you are seeing is an upper bound and a lower bound! And these can, of course, be different.
Now it is possible that you have some "weird" function like this:
    import math

    def p(n):
        # Even n: print about n log n stars; odd n: print about n^2 stars.
        if n % 2 == 0:
            print("*" * int(n * math.log2(n)))
        else:
            print("*" * (n * n))
This crazy algorithm does have the bounds specified in the problem from the Skiena book. And it has no Θ complexity. That might have been what you were thinking about, but do note that it is not necessary for a complexity function to be this weird in order for us to say the upper and lower bounds differ. The thing to remember is that upper and lower bounds are not tight unless explicitly stated to be so.
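Measuring the work p(n) does (here modeled as the number of stars printed, an assumption made just for illustration) shows the counts bouncing between the two growth rates, which is why no single Θ bound fits:

    import math

    def stars_printed(n):
        # Cost model for p(n): how many stars it prints.
        return int(n * math.log2(n)) if n % 2 == 0 else n * n

    for n in (1000, 1001, 10000, 10001):
        print(n, stars_printed(n))
    # Even sizes track n log n, odd sizes track n^2, so the counts
    # never settle onto a single Θ(f(n)) growth rate.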