Talking about Big O notation: if one algorithm's time complexity is O(N) and another's is O(2N), which one is faster?
The constants in front of your dominant term might seem more precise, but they are superfluous in Big O notation. For example, in O(2n) the constant 2 can be dropped: in proper Big O notation, O(2n) is written as O(n). In fact, O(2n) and O(3n) both simplify to O(n).
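To see why the constant factor does not change the growth class, here is a small sketch (function names are mine, for illustration): one function passes over the input once, the other twice. Doubling the input doubles the work for both, so both grow linearly.

```python
def touch_once(items):
    """One pass over the input: about n operations, O(n)."""
    count = 0
    for _ in items:
        count += 1
    return count

def touch_twice(items):
    """Two passes over the input: about 2n operations, still O(n)."""
    count = 0
    for _ in items:
        count += 1
    for _ in items:
        count += 1
    return count

# touch_twice always does exactly twice the work of touch_once,
# but that ratio is a constant: both functions scale linearly.
```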
Note that O(2n) is not the same as O(2^n). The naive recursive calculation of Fibonacci numbers is an example of an O(2^n) (exponential-time) algorithm, not an O(2n) one.
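For reference, the naive recursive Fibonacci mentioned above looks like this; each call spawns two further calls, so the call tree roughly doubles at every level, giving exponential O(2^n) growth:

```python
def fib(n):
    """Naive recursive Fibonacci.

    Each non-base call makes two recursive calls, so the number of
    calls grows exponentially with n -> O(2^n) time.
    """
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```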
The fastest possible running time for any algorithm is O(1), commonly referred to as Constant Running Time. In this case, the algorithm always takes the same amount of time to execute, regardless of the input size.
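A classic O(1) operation is indexing into an array or list: the cost is the same whether the list has ten elements or ten million (the function name below is just for illustration):

```python
def first_element(items):
    """Constant time: a single index operation, regardless of len(items)."""
    return items[0]
```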
The definition of big O is:
O(f(n)) = { g | there exist N and c > 0 such that g(n) < c * f(n) for all n > N }
In English, O(f(n)) is the set of all functions that have an eventual growth rate less than or equal to that of f.
So O(n) = O(2n). Neither is "faster" than the other in terms of asymptotic complexity. They represent the same growth rate, namely the "linear" growth rate.
Proof:
O(n) is a subset of O(2n): Let g be a function in O(n). Then there are N and c > 0 such that g(n) < c * n for all n > N. So g(n) < (c / 2) * 2n for all n > N. Thus g is in O(2n).
O(2n) is a subset of O(n): Let g be a function in O(2n). Then there are N and c > 0 such that g(n) < c * 2n for all n > N. So g(n) < 2c * n for all n > N. Thus g is in O(n).
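The witness constants from the proof can be checked numerically. This sketch assumes a concrete example function g(n) = 2n + 3 (my choice, not from the original answer), which is in both O(n) and O(2n):

```python
def g(n):
    """Example function: g(n) = 2n + 3, which is linear."""
    return 2 * n + 3

# g is in O(n): pick c = 3 and N = 3, so g(n) < c * n for all n > N
# (2n + 3 < 3n is equivalent to n > 3).
c, N = 3, 3
assert all(g(n) < c * n for n in range(N + 1, 1000))

# The same bound shows g is in O(2n): rewrite c * n as (c / 2) * (2n),
# exactly as in the subset proof above.
assert all(g(n) < (c / 2) * (2 * n) for n in range(N + 1, 1000))
```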
Typically, when people refer to an asymptotic complexity ("big O"), they refer to the canonical forms. For example: O(1) (constant), O(log n) (logarithmic), O(n) (linear), O(n log n) (linearithmic), O(n^2) (quadratic), O(2^n) (exponential).
(Here's a fuller list: Table of common time complexities)
So usually you would write O(n), not O(2n); O(n log n), not O(3 n log n + 15 n + 5 log n).