In my textbook I see the following:
Definition of the order of an algorithm
Algorithm A is order f(n) -- denoted O(f(n)) -- if constants k and n0 exist such that A requires no more than k * f(n) time units to solve a problem of size n >= n0.
I understand: Time requirements for different complexity classes grow at different rates. For instance, with increasing values of n, the time required for O(n) grows much more slowly than for O(n^2), which in turn grows more slowly than for O(n^3), and so forth.
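To make those growth rates concrete, here is a tiny sketch (my own illustration, not from the textbook) that just prints n, n^2, and n^3 side by side:

```python
# Illustration only: how fast n, n^2 and n^3 grow as n increases.
for n in (10, 100, 1000, 10000):
    print(f"n = {n:>6}   n^2 = {n**2:>12}   n^3 = {n**3:>16}")
```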
I do not understand: How k and n0 fit into this definition.
1) What is n0? Specifically, why does n have the subscript 0, and what does this subscript mean?
2) With question 1 answered, what does 'a problem of size n >= n0' mean? A larger data set? More loop repetitions? A growing problem size?
3) What is k, then? Why is k multiplied by f(n)? What does k have to do with increasing the problem size n?
I've already looked at:
Big Oh Notation - formal definition
Constants in the formal definition of Big O
What is an easy way for finding C and N when proving the Big-Oh of an Algorithm?
Confused on how to find c and k for big O notation if f(x) = x^2+2x+1
If you're referring to the point in the video where he says "This operation has a run time of big O(k), where k represents the number of elements in the list that we're adding to our existing list." then it's essentially the same thing as O(n), or linear complexity.
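To make that concrete, here is a small sketch (my own example, not code from the video; the name extend_list is made up) showing why adding k elements to an existing list costs O(k) work:

```python
# Sketch only: appending k new items to an existing list does k constant-time
# appends, so the whole operation is O(k) -- linear in the number of items added.
def extend_list(existing, new_items):
    for item in new_items:      # runs k = len(new_items) times
        existing.append(item)   # each append is O(1) amortized
    return existing

print(extend_list([1, 2, 3], [4, 5, 6]))  # [1, 2, 3, 4, 5, 6]
```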
The definition of big O notation is: We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k. Now here is the first example: Example 1. Show that f(x) = x^2 + 2x + 1 is O(x^2).
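The worked solution is not quoted above, but the usual argument (with the standard choice of constants) goes like this:

```latex
% Standard argument: pick C = 4 and k = 1.
% For x > 1 we have 2x <= 2x^2 and 1 <= x^2, so
x^2 + 2x + 1 \;\le\; x^2 + 2x^2 + x^2 \;=\; 4x^2 \qquad\text{whenever } x > 1,
% hence |f(x)| <= 4|x^2| for all x > 1, i.e. f(x) is O(x^2) with C = 4, k = 1.
```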
Big-O: f(n) = O(g(n)) iff f(n) does not grow any faster than g(n). In other words, f(n) ≤ C·g(n) for some constant C > 0 and for all n ≥ k, where k ≥ 0 is a constant.
Time complexity of counting sort is O(n+k), where n is the size of the array and k is the maximum element in the array, not the number of distinct elements.
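For illustration, here is a minimal counting sort sketch (my own, simplified, and assuming non-negative integers) that shows where the n and the k in O(n+k) come from: the tally loop runs n times, and the counting array has k+1 slots that must be initialized and scanned.

```python
# Minimal counting sort sketch (illustration only): assumes non-negative integers.
def counting_sort(arr):
    if not arr:
        return []
    k = max(arr)                             # k = maximum element in the array
    counts = [0] * (k + 1)                   # O(k) to allocate/initialize
    for x in arr:                            # O(n): tally each element
        counts[x] += 1
    result = []
    for value, count in enumerate(counts):   # O(k): walk the counting array
        result.extend([value] * count)
    return result                            # total work: O(n + k)

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```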
1) "n >= n0" means that we agree that for small n, A might need more than k*f(n) operations. E.g., bubble sort might be faster than quicksort or merge sort for very small inputs. The choice of 0 as a subscript is purely the author's preference.
2) Larger input size.
3) k is a constant. Suppose one algorithm performs 1000*n operations for an input of size n, so it is O(n). Another algorithm needs 5*n^2 operations for an input of size n. That means for an input of size 100, the first algorithm needs 100,000 ops and the second one only 50,000 ops. So, for input sizes around 100 you are better off choosing the second one, even though it is quadratic and the first one is linear. In the picture you can see that n0 = 200, because only for n greater than 200 does the quadratic function become more expensive than the linear one (here I assume that k equals 1).
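You can reproduce that crossover numerically; here is a quick sketch (my own) using the answer's two cost functions, 1000*n and 5*n^2:

```python
# Sketch reproducing the answer's example: a "linear" algorithm costing 1000*n
# operations vs a "quadratic" one costing 5*n^2 operations.
def linear_cost(n):
    return 1000 * n

def quadratic_cost(n):
    return 5 * n ** 2

for n in (100, 200, 300):
    print(n, linear_cost(n), quadratic_cost(n))
# n = 100: 100000 vs  50000  -> quadratic is cheaper
# n = 200: 200000 vs 200000  -> crossover point, n0 = 200
# n = 300: 300000 vs 450000  -> linear is cheaper from here on
```

For every n above 200 the linear algorithm stays cheaper, which is exactly the role n0 plays in the definition: the bound k*f(n) only has to hold from that point on.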