 

Why do we ignore coefficients in Big O notation?


While searching for answers relating to "Big O" notation, I have seen many SO answers such as this, this, or this, but I still have not clearly understood some points.

Why do we ignore the coefficients?

For example, this answer says that the final complexity of 2N + 2 is O(N); we remove the leading coefficient 2 and the final constant 2 as well.

Removing the final constant of 2 is perhaps understandable. After all, N may be very large, and so "forgetting" the final 2 may change the grand total only by a small percentage.

However, I cannot clearly understand how removing the leading coefficient makes no difference. If the leading 2 above became a 1 or a 3, the percentage change to the grand total would be large.

Similarly, apparently 2N^3 + 99N^2 + 500 is O(N^3). How do we ignore the 99N^2 along with the 500?
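To see this numerically, here is a quick sketch (Python; purely illustrative) showing that the ratio of 2N^3 + 99N^2 + 500 to N^3 settles near the leading coefficient 2, so the lower-order terms contribute a vanishing share:

    # Ratio of f(N) = 2N^3 + 99N^2 + 500 to N^3 as N grows.
    for N in [10, 100, 1000, 10**6]:
        f = 2*N**3 + 99*N**2 + 500
        print(N, f / N**3)   # tends towards 2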

asked Apr 29 '15 by swdeveloper


People also ask

Under what situation can we ignore Big O notation?

We may ignore any powers of n inside of the logarithms. The set O(log n) is exactly the same as O(log(n^c)). The logarithms differ only by a constant factor (since log(n^c) = c log n), and thus the big O notation ignores that. Similarly, logs with different constant bases are equivalent.
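For instance, a quick check (Python) that logs with different bases differ only by a constant factor:

    import math

    # log2(n) / log10(n) is the same constant (about 3.32) for every n,
    # so O(log2 n) and O(log10 n) describe the same growth.
    for n in [10, 1000, 10**9]:
        print(n, math.log2(n) / math.log10(n))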

Why do we ignore constants in time complexity?

The reason that we don't use constants with big O notation is that, theoretically, they don't matter much. What we are calculating with time complexity is the rate at which something grows; a constant factor becomes irrelevant once a large enough input is used.
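One way to see this: even a large constant factor is eventually overtaken by a faster-growing term (the constant 100 below is arbitrary):

    # 100n starts out far ahead, but n^2 overtakes it once n > 100.
    for n in [10, 100, 1000, 10**5]:
        print(n, 100*n, n**2)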

What are the two rules of calculating big O?

If we allow our function g(n) to be n², we can find a constant c = 1 and an N₀ = 0, and so long as N > N₀, N² will always be greater than N²/2 − N/2. We can easily prove this by subtracting N²/2 from both functions; then we can see that N²/2 > −N/2 is true when N > 0.
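A quick numeric spot-check of that inequality (a sanity check, not a proof):

    # Check N^2 >= N^2/2 - N/2 for a few values of N > 0.
    for N in [1, 10, 100, 10**6]:
        assert N**2 >= N**2/2 - N/2
    print("inequality holds for all checked N")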

Why do we use big O notation to compare algorithms?

In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. In other words, it describes a function's time or space complexity. This means we can know in advance how well an algorithm will perform in a specific situation.


1 Answer

The purpose of Big-O notation is to identify the dominant factor in the asymptotic behavior of a function as its argument tends towards infinity.

As we move towards larger values in the function's domain, some terms become more important than others.

Imagine f(n) = n^3+n^2. As n goes to infinity, n^2 becomes less and less relevant when compared with n^3.
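Concretely, the n^2 term's share of the total is n^2/(n^3 + n^2) = 1/(n + 1), which tends to 0 as n grows.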

But that's just the intuition behind the definition. In practice we ignore some portions of the function because of the formal definition:

f(x) = O(g(x)) as x->infinity

if and only if there is a positive real M and a real x_0 such that

|f(x)| <= M|g(x)| for all x > x_0.

That's from Wikipedia. What it actually means is that there is a point (x_0) after which some multiple of g(x) dominates f(x). The definition acts like a loose upper bound on the value of f(x).
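Applied to the question's 2N + 2 example: picking, say, M = 3 and x_0 = 2 (one valid choice of witnesses among many) gives 2n + 2 <= 3n for all n > 2, so 2n + 2 = O(n). A quick check in Python:

    # Witnesses M = 3, x0 = 2 for 2n + 2 = O(n):
    # |f(n)| <= M*|g(n)| must hold for all n > x0.
    M, x0 = 3, 2
    for n in range(x0 + 1, 10**4):
        assert 2*n + 2 <= M * n
    print("2n + 2 <= 3n for all checked n > 2")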

From that we can derive many other properties, such as f(x) + K = O(f(x)), x^n + x^(n-1) = O(x^n), etc. It's just a matter of using the definition to prove them.

In particular, the intuition behind removing the coefficient (K*f(x) = O(f(x))) lies in what we try to measure with computational complexity. Ultimately it's all about time (or any resource, really). But it's hard to know how much time each operation takes. One algorithm may perform 2n operations and another n, but the latter may have a large constant cost associated with each of them. So, for this purpose, it isn't easy to reason about the difference between n and 2n.
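A toy illustration of that last point (the per-operation costs below are made up):

    # Algorithm A: 2n cheap operations (cost 1 each).
    # Algorithm B: n expensive operations (cost 5 each).
    # B performs fewer operations, yet A is cheaper overall;
    # both are O(n) all the same.
    def cost_a(n): return 2 * n * 1
    def cost_b(n): return n * 5

    for n in [10, 1000, 10**6]:
        print(n, cost_a(n), cost_b(n))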

answered Sep 19 '22 by Juan Lopes