Two algorithms, say A and B, are written to solve the same problem. Algorithm A is O(n); Algorithm B is O(n^2). You would expect Algorithm A to perform better.
However, when you run a specific example on the same machine, Algorithm B runs more quickly. Give reasons to explain how this can happen.
For an input of size n, an algorithm that is O(n) performs a number of steps proportional to n, while an algorithm that is O(log(n)) performs roughly log(n) steps. Since log(n) is much smaller than n, the O(log(n)) algorithm is better: it will be much faster.
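To make the step counts concrete, here is a small sketch of my own (not part of the original answer): a linear scan does about n steps, while a halving search such as binary search does about log2(n) steps.

```java
// Hypothetical step counters (illustration only): a linear scan does ~n steps,
// while repeated halving, as in binary search, does ~log2(n) steps.
public class StepCountDemo {
    static long linearSteps(long n) { return n; }

    static long logSteps(long n) {
        long steps = 0;
        while (n > 1) { n /= 2; steps++; }
        return steps;
    }

    public static void main(String[] args) {
        for (long n : new long[]{16, 1024, 1_000_000}) {
            System.out.println("n=" + n + "  linear=" + linearSteps(n) + "  log=" + logSteps(n));
        }
    }
}
```

Even at n = 1,000,000 the halving counter only reaches about 19, which is why the log(n) algorithm pulls far ahead as n grows.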
Originally Answered: Is it true that O(n^2) always takes longer to run than O(log n)? No. The other answers have already elaborated on why this is the case.
So O(N*log(N)) is far better than O(N^2); it is much closer to O(N) than to O(N^2). But your O(N^2) algorithm may still be faster for small inputs, say N < 100, in real life.
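To see that small-N effect numerically, here is a sketch of my own; the cost models (15*N*log2(N) versus N*N) use made-up constants purely for illustration:

```java
// Made-up cost models: an O(N log N) algorithm with a largish constant factor
// versus an O(N^2) algorithm with constant 1. For small N the quadratic wins.
public class SmallNDemo {
    static double nLogNCost(double n) { return 15 * n * (Math.log(n) / Math.log(2)); }
    static double nSquaredCost(double n) { return n * n; }

    public static void main(String[] args) {
        for (int n : new int[]{10, 50, 1000}) {
            System.out.printf("N=%-5d  15*N*log2(N)=%10.0f  N^2=%10.0f%n",
                    n, nLogNCost(n), nSquaredCost(n));
        }
    }
}
```

With these particular constants the quadratic model is cheaper around N = 10 and N = 50, and the N*log(N) model takes over well before N = 1000; different constants just move the crossover point.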
Asymptotically, O(n) is faster than O(n^2); remember that big-O describes worst-case growth, not the actual running time on a given input.
Algorithm A, for example, can run in time 10000000*n, which is O(n). If algorithm B runs in time n*n, which is O(n^2), then A will be slower for every n < 10000000. O(n) and O(n^2) are asymptotic runtimes that describe the behavior as n -> infinity.
EDIT - EXAMPLE
Suppose you have the following two functions:
boolean flag;

void algoA(int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < 1000000; j++)
            flag = !flag;
}

void algoB(int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            flag = !flag;
}
algoA performs n*1000000 flag-flip operations, so it is O(n), whereas algoB performs n^2 flag-flip operations, so it is O(n^2). Just solve the inequality 1000000n > n^2 and you'll get that it holds for n < 1000000. That is, the O(n) method will be slower.
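You can check that crossover point directly by counting the operations each function performs; this sketch uses the step counts from the example above:

```java
// Step counts matching algoA and algoB above: 1000000*n vs n*n flag flips.
public class CrossoverDemo {
    static long stepsA(long n) { return 1_000_000L * n; } // O(n), big constant factor
    static long stepsB(long n) { return n * n; }          // O(n^2)

    public static void main(String[] args) {
        System.out.println(stepsA(1000) > stepsB(1000));           // algoA does more work here
        System.out.println(stepsA(2_000_000) < stepsB(2_000_000)); // but less work here
    }
}
```

Both comparisons print true: below n = 1000000 the "fast" O(n) algorithm actually does more flag flips, and above it the quadratic algorithm takes over.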
While all of the answers so far seem correct... none of them feel really "right" in the context of a CS class. In a computational complexity course you want to be precise and use definitions. I'll outline a lot of the nuances of this question and of computational complexity in general. By the end, we'll conclude why Itay's solution at the top is probably what you should've written. My main issue with Itay's solution is that it lacks definitions which are key to writing a good proof for a CS class. Note that my definitions may differ slightly from your class' so feel free to substitute in whatever you want.
When we say "an algorithm is O(n)" we actually mean "this algorithm is in the set O(n)". And the set O(n) contains all algorithms whose worst-case asymptotic complexity f(n) has the property that f(n) <= c*n + c_0 for some constants c and c_0 where c, c_0 > 0.
Now we want to prove your claim. First of all, the way you stated the problem, it has a trivial solution. That's because our asymptotic bounds are "worst-case". For many "slow" algorithms there is some input on which they run remarkably quickly. For instance, insertion sort is linear if the input is already sorted! So take insertion sort (O(n^2)) and merge sort (O(n*log(n))) and notice that the insertion sort will run faster if you pass in a sorted array! Boom, proof done.
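You can see the sorted-input effect by counting comparisons. Here is a sketch of my own: a standard insertion sort instrumented to count element comparisons, which comes out linear on already-sorted input and quadratic on reversed input.

```java
import java.util.stream.IntStream;

public class SortedInputDemo {
    // Insertion sort that counts element comparisons.
    static long insertionSortComparisons(int[] a) {
        long comparisons = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0) {
                comparisons++;
                if (a[j] <= key) break; // already in place: exits immediately on sorted input
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int[] sorted = IntStream.range(0, 10_000).toArray();
        // One comparison per element on sorted input: n - 1 total, i.e. linear.
        System.out.println(insertionSortComparisons(sorted)); // → 9999
    }
}
```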
But I am assuming that your exam meant something more like "show why a linear algorithm might run faster than a quadratic algorithm in the worst case." As Alex noted above, this is an open-ended question. The crux of the issue is that runtime analysis makes assumptions that certain operations are O(1) (e.g. for some problem you might assume that multiplication is O(1) even though it becomes quadratically slower for large numbers; one might argue that the numbers for a given problem are bounded to 100 bits, so it's still "constant time"). Since your class is probably focusing specifically on computational complexity, they probably want you to gloss over this issue. So we'll prove the claim assuming that our O(1) assumptions are right, and so there aren't details like "caching makes this algorithm way faster than the other one".
So now we have one algorithm which runs in f(n), which is O(n), and some other algorithm which runs in g(n), which is O(n^2). We want to use the definitions above to show that for some n we can have g(n) < f(n). The trick is that our assumptions have not fixed the constants c, c_0, c', c_0'. As Itay mentions, we can choose values for those constants such that g(n) < f(n) for many n. And the rest of the proof is what he wrote above (e.g. let c, c_0 be the constants for f(n) and say they are both 100, while c', c_0' are the constants for g(n) and they are both 1. Then g(n) < f(n) => n^2 + 1 < 100n + 100 => n^2 - 100n - 99 < 0 => (solve the quadratic to get actual bounds for n)).
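To make those chosen constants concrete, here is a small numeric check (a sketch; the bounds f(n) = 100n + 100 and g(n) = n^2 + 1 come straight from the example constants above):

```java
// f(n) = 100n + 100 bounds the O(n) algorithm   (c  = c0  = 100);
// g(n) = n^2 + 1    bounds the O(n^2) algorithm (c' = c0' = 1).
public class ConstantsDemo {
    static long f(long n) { return 100 * n + 100; }
    static long g(long n) { return n * n + 1; }

    public static void main(String[] args) {
        // Find the largest n for which the quadratic bound is still below the linear one.
        long largest = 0;
        for (long n = 1; n <= 10_000; n++) {
            if (g(n) < f(n)) largest = n;
        }
        System.out.println(largest); // → 100
    }
}
```

So with these constants the "quadratic" algorithm is cheaper for every n up to 100, exactly the kind of range where your exam's scenario plays out.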