I know of Bubble sort, Insertion sort, etc., but there are more efficient algorithms for sorting. By the Time Hierarchy Theorem, there are problems that can be solved in O(n²) but not in O(n^r) for any real r < 2. However, the constructions used in its proof are not very natural. What is a good example of a problem whose most efficient solution requires quadratic time?
I am looking for something that has preferably the following qualities:
Small caveat: the output should not be large. (If you want the sum of every pair of integers from a given list, it obviously takes quadratic time just to write the output.) You can assume it should be a decision problem, i.e., one with a yes-no answer. Also, assume the time complexity O(n²) is a function of the input size, i.e., n is the number of bits required to represent the input.
Quadratic time complexity describes an algorithm whose running time is proportional to the square of the size of the input data set (think of linear, but squared). In our programs, this time complexity arises whenever we nest one iteration over the data inside another.
Nested for loops are quadratic time, because you're running a linear operation within another linear operation (n · n = n²).
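A minimal sketch of that pattern (the function name and example data are mine, not from the answer): comparing every pair of elements with two nested loops.

```python
def count_equal_pairs(items):
    """Count pairs (i, j), i < j, with items[i] == items[j].

    The outer loop runs n times and the inner loop up to n times per
    outer iteration, so the total work is O(n^2).
    """
    n = len(items)
    count = 0
    for i in range(n):              # outer loop: n iterations
        for j in range(i + 1, n):   # inner loop: up to n iterations
            if items[i] == items[j]:
                count += 1
    return count

print(count_equal_pairs([1, 2, 1, 3, 2, 1]))  # 4
```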
If an algorithm's time complexity is quadratic, it means that the runtime of the algorithm is directly proportional to the square of the size of the input.
Quadratic-time functions are far slower than linear-time functions. Sometimes you can turn a quadratic algorithm into a linear one by using a hashmap, because hashmap lookups take expected constant time (not literally instant, but close in practice).
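A sketch of that trade-off (the pair-sum problem and function names are my own illustration): deciding whether any two list elements sum to a target, first with all-pairs checking, then with a hash set.

```python
def has_pair_with_sum_quadratic(nums, target):
    """Naive version: check all pairs, O(n^2) time."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_with_sum_linear(nums, target):
    """Hash-based version: one pass with expected O(1) lookups, O(n) total."""
    seen = set()
    for x in nums:
        if target - x in seen:  # expected constant-time hash lookup
            return True
        seen.add(x)
    return False

print(has_pair_with_sum_linear([3, 8, 5, 1], 9))  # True (8 + 1 = 9)
```

Both versions return the same answers; only the running time differs.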
Matrix multiplication has a theoretical lower bound of Ω(n²), since all n² entries of the output need to be produced. The best known algorithm to date (according to the above-linked Wikipedia article) has complexity O(n^2.3727), while the naive algorithm is O(n³). (Here n is the matrix dimension, not the input size in bits.)
According to the Wikipedia article on Computational complexity of mathematical operations, back-substitution of a triangular matrix to obtain n solutions is O(n²). There are probably many other examples around on the web.
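A sketch of back-substitution for an upper-triangular system Ux = b (plain lists of lists, my own illustration): row i needs about n − i multiplications, so the total is roughly n²/2 operations, i.e. O(n²) in the dimension n.

```python
def back_substitute(U, b):
    """Solve Ux = b for upper-triangular U, working from the last row up.

    Row i costs O(n - i) multiplications, so the whole solve is O(n^2).
    """
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# 2x + y = 5 and 3y = 9  =>  y = 3, x = 1
print(back_substitute([[2.0, 1.0], [0.0, 3.0]], [5.0, 9.0]))  # [1.0, 3.0]
```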
EDIT: A 2014 paper by Michele Borassi, et al., discusses a number of decision problems (output size O(1)) that can be solved in O(n²) time but not in O(n^(2−ε)) for any ε > 0. (Of course, as always, these results are conditional: they assume the Strong Exponential Time Hypothesis, which in particular implies P ≠ NP.)
Their paper leads off with a modified k-SAT problem:
Input:
- a set V of variables;
- a set of clauses over V, each containing at most k literals;
- the list of all possible evaluations (truth assignments) of the variables in V.

Output: True if there is an evaluation of all variables that satisfies all clauses, False otherwise.
Note that the unmodified k-SAT problem (where the input does not include the third bullet above) is NP-complete, so one normally thinks of it as an exponential-time problem. Here, however, the input size is itself exponential in the number of variables. They show that, with this modification, the problem can always be solved in quadratic time (simply try all possible evaluations). More to the point for this answer, they also show that this is the minimum time complexity of any algorithm that solves the problem.
One might object that this modified k-SAT problem is hardly natural. However, they then use it to show that a number of other problems, which do seem natural, also cannot be solved in less than O(n²) time. The simplest one to state is the subset graph problem:
Input: a set X and a collection C of subsets of X.
Output: the graph G = (C, E), where, for each pair S, S′ ∈ C, (S, S′) ∈ E if and only if S ⊆ S′.
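The naive construction compares every pair of sets, which is the quadratic behaviour the paper shows cannot be beaten under SETH (the function below is my own sketch using Python frozensets):

```python
def subset_graph(sets):
    """Build the subset-graph edge list by comparing all pairs of sets.

    Two nested loops over the collection: O(|C|^2) comparisons, each
    costing up to the size of the sets involved.
    """
    edges = []
    for i, s in enumerate(sets):
        for j, t in enumerate(sets):
            if i != j and s <= t:  # s is a subset of t
                edges.append((i, j))
    return edges

print(subset_graph([{1}, {1, 2}, {3}]))  # [(0, 1)]
```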
You are making the fundamental mistake of mixing computational models.
The Time Hierarchy Theorem is about Turing machines, while most of the other bounds have either their own models (like the comparison model for sorting) or are usually stated in the RAM model.
So the real question is: which computational model are you talking about?
Also, talking about a best algorithm being no worse than O(n²) is not meaningful on its own: big-O is only an upper bound. For a tight bound you probably meant Θ(n²).