n log n > n, but it is almost a pseudo-linear relationship. If n = 1 billion, log n ≈ 30, so n log n is about 30 billion, which is 30 × n: seemingly still on the order of n.
I am wondering whether this complexity difference between n log n and n is significant in real life.

E.g.: finding the kth element in an unsorted array is O(n) using the quickselect algorithm. If I sort the array and then index the kth element, it is O(n log n). To sort an array with 1 trillion elements, I would be roughly 40 times slower (log₂ of 1 trillion ≈ 40) than if I used quickselect.
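To make the comparison concrete, here is a minimal Python sketch of the two approaches (the function names and the random-pivot partitioning are my own illustration, not a reference implementation): quickselect runs in O(n) on average, while sort-then-index is O(n log n).

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (1-based) of a.
    Average-case O(n); worst case O(n^2) with unlucky pivots."""
    a = list(a)                  # work on a copy
    lo, hi = 0, len(a) - 1
    k -= 1                       # convert to 0-based index
    while True:
        if lo == hi:
            return a[lo]
        pivot = a[random.randint(lo, hi)]
        # three-way partition of the current range around the pivot
        lt = [x for x in a[lo:hi + 1] if x < pivot]
        eq = [x for x in a[lo:hi + 1] if x == pivot]
        gt = [x for x in a[lo:hi + 1] if x > pivot]
        a[lo:hi + 1] = lt + eq + gt
        if k < lo + len(lt):
            hi = lo + len(lt) - 1          # answer is in the "< pivot" part
        elif k < lo + len(lt) + len(eq):
            return pivot                   # answer equals the pivot
        else:
            lo = lo + len(lt) + len(eq)    # answer is in the "> pivot" part

def kth_by_sorting(a, k):
    """O(n log n): sort a copy, then index."""
    return sorted(a)[k - 1]

data = [random.randint(0, 10**6) for _ in range(10**5)]
assert quickselect(data, 500) == kth_by_sorting(data, 500)
```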
O(log n) means that the algorithm's maximum running time is proportional to the logarithm of the input size. O(n) means that it is proportional to the input size itself. Basically, O(something) is an upper bound on the number of (atomic) instructions the algorithm executes.
The base of the logarithm is usually small (typically 2, and in any case it only contributes a constant factor). So for larger values of n, n·log(n) becomes clearly greater than n, and that is why an O(n log n) algorithm is slower than an O(n) one.
The lower bound depends on the problem to be solved, not on the algorithm. Yes, constant time, i.e. O(1), is better than linear time O(n), because the former does not depend on the input size of the problem. From fastest to slowest, the order is O(1), O(log n), O(n), O(n log n).
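As a rough illustration of that ordering (a small sketch of my own, not part of the original answer), you can simply print how each term grows with n:

```python
import math

# O(1) stays flat, O(log n) grows very slowly,
# O(n) grows linearly, O(n log n) grows a bit faster than linear.
for n in (10, 10**3, 10**6, 10**9):
    print(f"n={n:>13,}  log n={math.log2(n):6.1f}  "
          f"n log n={n * math.log2(n):>16,.0f}")
```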
Logarithmic time complexity, written O(log n) in Big-O notation: when an algorithm has O(log n) running time, the number of operations grows very slowly as the input size grows. Example: binary search.
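For instance, a standard iterative binary search (a minimal sketch) halves the remaining range on every step, so a sorted array of a billion elements needs only about 30 comparisons:

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.
    Each iteration halves the remaining range, so it runs in O(log n)."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```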
The main purpose of Big-O notation is to let you make estimates like the ones in your post and decide for yourself whether spending the effort to code a typically more advanced algorithm is worth the additional CPU cycles that improvement buys you. Depending on the circumstances, you may get a different answer, even when your data set is relatively small.
Another thing Big-O notation hides is the constant multiplicative factor. For example, Quick Select has a very reasonable multiplier, making the time savings from employing it on extremely large data sets well worth the trouble of implementing it.
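As a rough illustration of that constant factor (my own sketch, assuming NumPy is available; np.partition does partition-based selection rather than a textbook Quick Select, but it makes the same point), selection comfortably beats a full sort on a large array:

```python
import time
import numpy as np

a = np.random.rand(10_000_000)
k = 1_000

t0 = time.perf_counter()
kth_select = np.partition(a, k)[k]     # selection, O(n) on average
t1 = time.perf_counter()
kth_sorted = np.sort(a)[k]             # full sort, O(n log n)
t2 = time.perf_counter()

assert kth_select == kth_sorted
print(f"partition: {t1 - t0:.3f}s   sort: {t2 - t1:.3f}s")
```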
Another thing you need to keep in mind is space complexity. Very often, algorithms with O(N log N) time complexity have O(log N) space complexity. This may present a problem for extremely large data sets, for example when a recursive function runs on a system with limited stack capacity.
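For instance, the classic way to keep quicksort's stack usage at O(log N) is to recurse only into the smaller partition and loop over the larger one. A minimal sketch, assuming a simple Lomuto-style partition (not taken from any particular library):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort that recurses only into the smaller side,
    keeping the recursion depth (stack usage) at O(log n)."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        # Lomuto partition around the last element
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        # Recurse into the smaller half, iterate over the larger half
        if i - lo < hi - i:
            quicksort(a, lo, i - 1)
            lo = i + 1
        else:
            quicksort(a, i + 1, hi)
            hi = i - 1

data = [5, 2, 9, 1, 7, 3]
quicksort(data)
print(data)  # [1, 2, 3, 5, 7, 9]
```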
It depends.
When I was working at Amazon, there was a method that did a linear search on a list. We could have used a hash table and done the lookup in O(1) instead of O(n).
I suggested the change, but it wasn't approved, because the input was small and it wouldn't really have made a noticeable difference.
However, if the input were large, it would make a difference.
In another company, where the data/input was huge, using a tree instead of a list made a huge difference. So it depends on the data and the architecture of the application.
It is always good to know your options and how you can optimize.
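For the hash-table case above, here is a small sketch of the difference (illustrative names only, not the actual code from that project): membership in a Python set is an O(1) hash lookup, while membership in a list is an O(n) scan.

```python
import time

n = 1_000_000
items = list(range(n))
lookup_set = set(items)              # hash-based, O(1) average lookup

t0 = time.perf_counter()
found_list = (n - 1) in items        # linear scan, O(n)
t1 = time.perf_counter()
found_set = (n - 1) in lookup_set    # hash lookup, O(1)
t2 = time.perf_counter()

print(f"list: {t1 - t0:.6f}s   set: {t2 - t1:.6f}s")
```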