
Is n or nlog(n) better than constant or logarithmic time?

In the Princeton tutorial on Coursera the lecturer explains the common order-of-growth functions that are encountered. He says that linear and linearithmic running times are "what we strive" for and his reasoning was that as the input size increases so too does the running time. I think this is where he made a mistake because I have previously heard him refer to a linear order-of-growth as unsatisfactory for an efficient algorithm.

While he was speaking he also showed a chart that plotted the different running times - constant and logarithmic running times looked to be more efficient. So was this a mistake or is this true?

Asked by template boy on Sep 17 '14

People also ask

Which is better, n log n or n?

No matter how two functions behave on small values of n, they are compared against each other when n is large enough. Theoretically, there is an N such that for every n > N, n log n >= n. If you choose N = 10, n log n is always greater than n.
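
A quick numeric check makes that concrete (a throwaway Python sketch of my own, not from the quoted answer):

    import math

    # Past a small threshold, n*log2(n) always exceeds n,
    # because log2(n) > 1 for every n > 2.
    for n in [2, 10, 100, 1_000_000]:
        print(n, n * math.log2(n))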

Is constant time better than log n?

Constant time is better than log(n) time in most cases. In edge cases where log(n) is smaller than the constant factor, the log(n) algorithm will be faster (in the real world). Remember that one billion and 1 are both "constant" in big O notation.

Which time complexity is best?

O(1) has the least complexity. Often called "constant time": if you can create an algorithm to solve the problem in O(1), you are probably at your best.

Is log n or n log n faster?

Yes, for binary search the time complexity is O(log n), not O(n log n), so it will be less than O(n). But O(n log n) is greater than O(n).

Why is O(log n) faster than O(n)?

This can easily be proved through limit theory: as n increases without limit, log(n)/n converges to zero. This means that, given sufficiently large n, an O(log n) algorithm will eventually be faster than an O(n) one; moreover, as n increases beyond that point, the O(log n) algorithm pulls even further ahead.
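
The limit argument is easy to see numerically (a minimal Python sketch, my own illustration):

    import math

    # log(n)/n tends to 0 as n grows, so an O(log n) algorithm
    # eventually beats an O(n) one by an ever-widening margin.
    for n in [10, 1_000, 1_000_000, 1_000_000_000]:
        print(n, math.log(n) / n)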


Is there a better algorithm than logarithmic time?

He said those algorithms are what we strive for, which is generally true. Many problems cannot possibly be solved in better than logarithmic or linear time, and while constant time would be better in a perfect world, it's often unattainable.

Which is faster, quadratic or linearithmic time complexity?

So the quadratic algorithm might have 2*n^2 time complexity, whereas the linearithmic one has 8,251,663*n log n time complexity. If you are only interested in inputs that are a few thousand elements long, then the quadratic algorithm will be faster.
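
Using the constants quoted above, a short Python sketch (my own, purely illustrative) can locate roughly where the n log n version finally overtakes the quadratic one:

    import math

    def quadratic(n):
        return 2 * n**2                      # 2*n^2 steps

    def linearithmic(n):
        return 8_251_663 * n * math.log2(n)  # huge constant factor

    # Double n until the linearithmic cost drops below the quadratic cost;
    # below that crossover the "worse" big-O is actually the faster one.
    n = 2
    while linearithmic(n) >= quadratic(n):
        n *= 2
    print("crossover below n =", n)          # around a hundred million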


3 Answers

It is a mistake if taken to mean that O(n) and O(n log n) functions have better complexity than O(1) and O(log n) functions. Looking at typical cases of complexity in big O notation:

O(1) < O(log n) < O(n) < O(n log n) < O(n^2)

Notice that this doesn't necessarily mean that they will always be better performance-wise - we could have an O(1) function that takes a long time to execute even though its complexity is unaffected by element count. Such a function would look better in big O notation than an O(log n) function, but could actually perform worse in practice.
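
As a contrived illustration of that point (my own sketch, with a made-up one-second "constant"), an O(1) lookup with a huge fixed cost loses to an O(log n) binary search at any realistic input size:

    import time
    from bisect import bisect_left

    def constant_time_lookup(table, key):
        time.sleep(1)                # huge fixed cost, but still O(1)
        return table[key]

    def log_time_lookup(sorted_keys, key):
        # O(log n): even for n = 2**40 this is only ~40 comparisons,
        # far cheaper than the one-second "constant" above.
        return bisect_left(sorted_keys, key)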

Generally speaking: a function with lower complexity (in big O notation) will outperform a function with greater complexity (in big O notation) when n is sufficiently high.

Answered by Conduit on Nov 11 '22


You're missing the broader context in which those statements must have been made. Different kinds of problems have different demands, and often even have theoretical lower bounds on how much work is absolutely necessary to solve them, no matter the means.

For operations like sorting or scanning every element of a simple collection, there is a hard lower bound proportional to the number of elements in the collection, because the output depends on every element of the input. [1] Thus, O(n) or O(n*log(n)) is the best one can do.

For other kinds of operations, like accessing a single element of a hash table or linked list, or searching in a sorted set, the algorithm needn't examine all of the input. In those settings, an O(n) operation would be dreadfully slow.
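
For instance (a small Python sketch of my own), both of these touch far fewer than n elements:

    from bisect import bisect_left

    prices = {"apple": 3, "pear": 2}       # hash table: O(1) expected lookup
    print(prices["apple"])

    sorted_ids = [2, 5, 9, 14, 21, 30]     # sorted collection: O(log n) search
    i = bisect_left(sorted_ids, 14)
    print(sorted_ids[i] == 14)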

[1] Others will note that sorting by comparisons also has an n*log(n) lower bound, from information-theoretic arguments. There are non-comparison based sorting algorithms that can beat this, for some types of input.

Answered by Phil Miller on Nov 11 '22


Generally speaking, what we strive for is the best we can manage to do. But depending on what we're doing, that might be O(1), O(log log N), O(log N), O(N), O(N log N), O(N^2), O(N^3), or (for certain algorithms) perhaps O(N!) or even O(2^N).

Just for example, when you're dealing with searching in a sorted collection, binary search borders on trivial and gives O(log N) complexity. If the distribution of items in the collection is reasonably predictable, we can typically do even better--around O(log log N). Knowing that, an algorithm that was O(N) or O(N2) (for a couple of obvious examples) would probably be pretty disappointing.
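
For reference, the borderline-trivial binary search looks like this (a standard textbook version, my own sketch):

    def binary_search(a, target):
        # Return an index of target in the sorted list a, or -1.
        # Each iteration halves the search range: O(log N).
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == target:
                return mid
            elif a[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1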

On the other hand, sorting is generally quite a bit higher complexity--the "good" algorithms manage O(N log N), and the poorer ones are typically around O(N^2). Therefore, for sorting, an O(N) algorithm is actually very good (and in fact only possible for rather constrained types of inputs), and we can pretty much count on the fact that something like O(log log N) simply isn't possible.
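
Counting sort is one example of the "rather constrained inputs" case: when the keys are small non-negative integers, sorting drops to O(N + k) (a sketch of my own, not from the answer):

    def counting_sort(items, max_key):
        # Sort non-negative integers <= max_key in O(n + max_key) time,
        # sidestepping the O(N log N) comparison-sort lower bound.
        counts = [0] * (max_key + 1)
        for x in items:
            counts[x] += 1
        out = []
        for value, count in enumerate(counts):
            out.extend([value] * count)
        return out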

Going even further, we'd be happy to manage a matrix multiplication in only O(N^2) instead of the usual O(N^3). We'd be ecstatic to get optimum, reproducible answers to the traveling salesman problem or subset sum problem in only O(N^3), given that optimal solutions to these normally require O(N!).
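
For context, the "usual" O(N^3) matrix multiplication is just three nested loops (my own sketch):

    def matmul(A, B):
        # Naive product of an n x m and an m x p matrix: O(n*m*p),
        # i.e. O(N^3) for square matrices.
        n, m, p = len(A), len(B), len(B[0])
        C = [[0] * p for _ in range(n)]
        for i in range(n):
            for k in range(m):
                for j in range(p):
                    C[i][j] += A[i][k] * B[k][j]
        return C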

Answered by Jerry Coffin on Nov 11 '22