I read in a book that the expression O(2^n + n^100) reduces to O(2^n) when we drop the insignificant parts. I am confused because, as per my understanding, if the value of n is 3, then the n^100 part seems to have a higher count of executions. What am I missing?
So, O(N*log(N)) is far better than O(N^2). It is much closer to O(N) than to O(N^2). But an O(N^2) algorithm can still be faster in real life for small inputs (say, N < 100), because Big-O hides constant factors.
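To make the gap concrete, here is a minimal sketch in plain Python. The numbers are illustrative operation counts for the two growth functions, not timings of any real algorithm:

```python
import math

# Illustrative operation counts for an O(N*log N) and an O(N^2) algorithm.
# For small N the quadratic count is comparable, but it falls far behind
# as N grows.
for n in (10, 100, 1000, 100000):
    nlogn = n * math.log2(n)
    nsq = n * n
    print(f"N={n:>6}: N*log2(N) ~ {nlogn:12.0f}, N^2 ~ {nsq:12d}")
```

At N = 10 the quadratic count is only about three times larger; at N = 100000 it is thousands of times larger, which is why the asymptotic ranking wins out eventually.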
O(2^n) denotes an algorithm whose running time doubles with each addition to the input data set. The growth curve of an O(2^n) function is exponential: it starts off very shallow, then rises meteorically.
Big O notation ranks an algorithm's efficiency by its growth rate, ignoring constant factors such as the "6" in 6n^4. Therefore, this function has an order of growth, or "big O" rating, of O(n^4). Among the most commonly used sorting algorithms, a rating of O(n log n) is in general the best that can be achieved.
When comparing Big-O notations, you ignore all constants: N^2 has a higher growth rate than N*log(N), which in turn grows more quickly than O(1) [constant]. The power of N determines the growth rate: N^3 grows faster than N^2 because 3 > 2.
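A quick numeric check (a sketch in plain Python) shows why constants are dropped but powers are not:

```python
# The constant 6 never changes the ratio between 6*n**4 and n**4 (it is
# always exactly 6), while the gap between n**4 and n**2 (different powers
# of n) widens without bound as n grows.
for n in (10, 100, 1000):
    print(n, (6 * n**4) / n**4, n**4 // n**2)
```

The first ratio is a fixed 6.0 at every n, so it cannot affect which function eventually dominates; the second ratio is n^2 and keeps growing.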
Big O notation is asymptotic in nature, meaning we consider the expression as n tends to infinity.
You are right that for n = 3, n^100 is greater than 2^n, but once n > 1000, 2^n is always greater than n^100, so we can disregard n^100 in O(2^n + n^100) for n much greater than 1000.
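You can even locate the crossover with a few lines of Python. The loop below finds it directly rather than assuming it; Python's arbitrary-precision integers make the comparison of these huge values exact:

```python
# Find the first n at which 2**n overtakes n**100 for good.
# Python ints are arbitrary-precision, so the comparison is exact.
n = 2
while 2**n <= n**100:
    n += 1
print(n)  # the loop stops by n = 1000 at the latest
```

Beyond that point 2^n stays ahead forever, which is exactly why the n^100 term is dropped.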
For a formal mathematical description of Big O notation, the Wikipedia article does a good job. For a less mathematical description, this answer also does a good job:
What is a plain English explanation of "Big O" notation?