A description of a function in terms of big O notation only provides an upper bound on the growth rate of the function. However, we generally seek to provide the tightest possible bound. If you say an algorithm is O(n³), but it is also O(n²), it is generally best to say O(n²).
O(2ⁿ) actually means C·2ⁿ, where C is an arbitrarily chosen positive constant. Likewise, O(n³) means D·n³, where D is another arbitrarily chosen positive constant. The claim "O(n³) is more efficient than O(2ⁿ)" means that, given any fixed C and D, it is always possible to find an n₀ such that for all n ≥ n₀, D·n³ < C·2ⁿ.
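As a minimal sketch of that claim (the constants C = 1 and D = 1000 below are arbitrary illustrative choices, not constants from any real algorithm), the following scans for the point past which D·n³ stays below C·2ⁿ:

```python
def crossover(C, D, limit=200):
    """Smallest n0 (up to limit) with D*n**3 < C*2**n for all n >= n0."""
    n0 = 1
    for n in range(1, limit):
        if not (D * n**3 < C * 2**n):
            n0 = n + 1  # the inequality still fails at n, so n0 must come later
    return n0

print(crossover(C=1, D=1))     # 10:  10**3 = 1000 < 2**10 = 1024, and it stays below
print(crossover(C=1, D=1000))  # 24:  a 1000x larger D only pushes the crossover out
```

Note how even a thousandfold larger constant on the polynomial side only moves n₀ from 10 to 24; the exponential always wins eventually.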
The answer is no, for several reasons: the expression O(f(n)) means that the running time is at most C·f(n), with C a constant whose value is left undefined. It is quite possible that the constant for the O(log n) algorithm is much, much larger than the one for the O(n) algorithm.
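For instance, a throwaway sketch with invented constants (1000 for the O(log n) algorithm, 1 for the O(n) one; both are assumptions, not measurements) shows the "slower" linear algorithm winning until n is well past 10,000:

```python
import math

# Both constants are made up purely to illustrate the point above.
def cost_log(n):
    return 1000 * math.log2(n)   # O(log n) algorithm with a huge constant

def cost_linear(n):
    return n                     # O(n) algorithm with constant 1

for n in (100, 1_000, 10_000, 100_000):
    winner = "O(n)" if cost_linear(n) < cost_log(n) else "O(log n)"
    print(f"n={n:7d}: 1000*log2(n) = {cost_log(n):7.0f} vs n = {n:7d} -> {winner} wins")
```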
So, O(N log N) is far better than O(N^2); it is much closer to O(N) than to O(N^2). But your O(N^2) algorithm is faster for N < 100 in real life. There are a lot of reasons why it can be faster.
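One way to see how that happens is a cost model with invented constant factors (20 for the O(N log N) algorithm, 1 for the O(N^2) one; real constants depend entirely on the code and the hardware):

```python
import math

# Hypothetical operation counts; the constants 20 and 1 are assumptions.
def cost_nlogn(n):
    return 20 * n * math.log2(n)

def cost_n2(n):
    return n * n

for n in (10, 50, 100, 200, 1000):
    faster = "N^2" if cost_n2(n) < cost_nlogn(n) else "N log N"
    print(f"N={n:5d}: N^2 = {cost_n2(n):9.0f}, 20*N*log2(N) = {cost_nlogn(n):9.0f} -> {faster} wins")
```

Under these assumptions the quadratic algorithm does less work all the way up to N ≈ 143, which is exactly the kind of "faster for N < 100" effect described above.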
I can think of a few reasons. Probably the #1 reason: the O(N²) algorithm has sufficiently higher constant factors that, for the size of task being contemplated, the O(N³) version is faster.
Here are examples to convince you that O(N³) can in some cases be better than O(N²).
1. The O(N²) algorithm may be very complex to code, whereas if the input size is, say, N ≤ 100, then for practical use the O(N³) algorithm can be fast enough.
2. O(N²) may have a large constant multiplied to it, for example c = 1000; then for N = 100, c⋅N² = 1000⋅100² = 10⁷, whereas if c = 1 for O(N³), then c⋅N³ = 10⁶ (the sketch after this list checks this arithmetic).
3. The O(N²) algorithm may have much higher space complexity than the O(N³) one.
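As a quick check of example 2 (reusing the assumed constants c = 1000 and c = 1 from that example):

```python
# c = 1000 for the O(N^2) algorithm, c = 1 for the O(N^3) one, as assumed
# in example 2; the crossover sits exactly at N = 1000.
for n in (100, 500, 1000, 5000):
    quad = 1000 * n**2
    cube = n**3
    cheaper = "N^3" if cube < quad else ("tie" if cube == quad else "1000*N^2")
    print(f"N={n:5d}: 1000*N^2 = {quad:.1e}, N^3 = {cube:.1e} -> {cheaper}")
```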
Another thing is that some algorithms have a big constant factor. An O(N^2) algorithm might have a constant factor so big that it is not really practical to use if N is small enough (as kindly noted by Thorban).
The order of growth of the running time is not the only reason for choosing one algorithm over another; you must also analyze the other factors, such as constant factors, memory use, and implementation complexity.
Adding to the already posted answers, I'd like to mention cache behaviour. A particular memory access pattern might be so much slower due to repeated cache misses that a theoretically slower algorithm with a more cache-friendly memory access pattern performs much better.
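A minimal sketch of that effect in Python (assuming NumPy is available; the matrix size is an arbitrary choice, and exact timings vary by machine): both loops below perform the same N² additions, but the row sweep reads contiguous memory while the column sweep strides across cache lines, so the second loop is typically several times slower.

```python
import time
import numpy as np

n = 4000
a = np.random.rand(n, n)   # C-order array: each row is contiguous in memory

start = time.perf_counter()
s = sum(row.sum() for row in a)     # cache-friendly: sequential reads
t_rows = time.perf_counter() - start

start = time.perf_counter()
s = sum(col.sum() for col in a.T)   # cache-hostile: stride of n*8 bytes per element
t_cols = time.perf_counter() - start

print(f"row sweep: {t_rows:.3f}s, column sweep: {t_cols:.3f}s")
```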