 

Algorithmic task which requires quadratic time?

I know of Bubble sort, Insertion sort, etc., but there are more efficient algorithms for sorting. By the Time Hierarchy Theorem, there are problems that can be solved in O(n^2) but not in O(n^r) for any real r < 2. However, the constructions used in its proof are not very natural. What is a good example of a problem whose most efficient solution requires quadratic time?

I am looking for something that has preferably the following qualities:

  • It is simple and easy to understand
  • Something that is used frequently
  • It can be proved that O(n^2) is the best run-time for a correct solution

Small caveat - The output should not be large. (If you want the sum of every pair of integers from a given list, it obviously requires quadratic time just to output them.) You can assume it is a decision problem, i.e. one with a yes-no answer. Also, let us assume the time complexity O(n^2) is a function of the input size, i.e. n is the number of bits required to represent the input.

asked Dec 16 '12 by akashnil



2 Answers

Matrix multiplication has a theoretical lower bound of Ω(n^2) (where n is the matrix dimension), since all n^2 output entries need to be produced. The best known algorithm to date, per the Wikipedia article on matrix multiplication algorithms, has complexity O(n^2.3727). The naive algorithm has complexity O(n^3).
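For concreteness, here is the naive cubic algorithm as a small Python sketch (my own illustration, not from the answer): each of the n^2 output entries gets an inner loop of length n, which is where the n^3 comes from, and even just writing the output already forces Ω(n^2) work.

```python
def matmul(A, B):
    # Naive O(n^3) multiplication of two n x n matrices.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):          # n^2 output entries...
            for k in range(n):      # ...each needing n multiply-adds
                C[i][j] += A[i][k] * B[k][j]
    return C
```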

According to the Wikipedia article on the computational complexity of mathematical operations, back-substitution of a triangular matrix to obtain n solutions is O(n^2). There are probably many other examples around on the web.
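A minimal Python sketch of back-substitution for an upper-triangular system Ux = b (illustrative; the variable names are my own): solving for row i touches the n − i entries to its right, and summing that over all rows gives O(n^2).

```python
def back_substitute(U, b):
    # Solve U x = b for an upper-triangular matrix U in O(n^2):
    # process rows bottom-up, each using the already-solved unknowns.
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / U[i][i]
    return x
```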

EDIT: A 2014 paper by Michele Borassi, et al., discusses a number of decision problems (output size O(1)) that can be solved in O(n^2) time but not in O(n^(2-ε)) time for any ε > 0. (Of course, as always, these results are conditional: they depend on P ≠ NP or, more precisely, on the Strong Exponential Time Hypothesis being true.)

Their paper leads off with a modified k-SAT problem:

Input:

  • two sets of variables X = {x_i}, Y = {y_j} of the same size;
  • a set C of clauses over these variables, such that each clause has at most size k; and
  • the two power sets ℘(X), ℘(Y) of X and Y (used to change the input size).

Output: true if there is an evaluation of all variables that satisfies all clauses, false otherwise.

Note that the unmodified k-SAT problem (where the input does not include the third bullet above) is NP-complete, so normally one thinks of it as an exponential-time problem. However, here the input size is itself exponential in the number of variables. They show that, with this modification, the problem can always be solved in quadratic time (simply try all possible evaluations). More to the point for this answer, they also show that this is the minimum time complexity for any algorithm that solves the problem.
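The "try all possible evaluations" step can be sketched in Python. This is purely illustrative and not from the paper: the (variable index, polarity) clause encoding is my own, and the power-set padding that inflates the input size (making this brute force quadratic in the padded input) is omitted.

```python
from itertools import product

def satisfiable(num_vars, clauses):
    # Brute-force SAT check: try all 2^num_vars assignments.
    # A clause is a list of (var_index, polarity) pairs and is
    # satisfied when at least one of its literals holds.
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return True
    return False
```

With the power sets included, the input size is itself Θ(2^n) in the number of variables n, so the 2^n assignments tried here stay within quadratic time in the (padded) input size.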

One might object that this modified k-SAT problem is hardly natural. However, they then use it to show that a number of other problems, which do seem natural, also cannot be solved in less than O(n^2) time. The simplest one to state is the subset graph problem:

Input: a set X and a collection C of subsets of X.

Output: the graph G = (C, E), where, for each C, C′ ∈ C, (C, C′) ∈ E if and only if C ⊆ C′.
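A direct quadratic-time construction of the subset graph in Python (an illustrative sketch; sets are Python set objects and the output is an edge list of index pairs, which are my own representation choices):

```python
def subset_graph(sets):
    # Compare every ordered pair of distinct sets; emit edge (i, j)
    # when sets[i] is a subset of sets[j]. Pairwise comparison alone
    # is quadratic in the number of sets.
    edges = []
    for i, C in enumerate(sets):
        for j, Cp in enumerate(sets):
            if i != j and C <= Cp:  # C <= Cp tests C ⊆ C′
                edges.append((i, j))
    return edges
```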

answered Oct 12 '22 by Ted Hopp


You are making a fundamental mistake of mixing computational models.

The Time Hierarchy Theorem is about Turing machines, while most of the other bounds come with their own models (like the comparison model for sorting) or are usually stated for the RAM model.

So the real question is, which computational model are you talking about?

Also, talking about a best algorithm being no worse than O(n^2) (Big-O) doesn't establish a lower bound: Big-O is only an upper bound. You probably meant to use Theta (Θ), or Ω for the lower bound.

answered Oct 12 '22 by TexMan