
Big Oh for (n log n) [closed]

I am currently studying basic algorithms and Big Oh notation. I was wondering if anyone can show me what O(n log n) code would look like in Java, or direct me to an SO page where an example exists.

Since I am just a beginner, I can only imagine the code before I write it. Theoretically (at least), it should contain one for loop that runs n times. Then, for the log n part, we can use a while loop that halves its counter, so the for loop is executed n times and the while loop is executed log base 2 of n times. At least that is how I am imagining it in my head, but seeing the code would clear things up.

asked Sep 26 '13 by hherklj kljkljklj




3 Answers

int n = 100;
for (int i = 0; i < n; i++) // this loop is executed n times, so O(n)
{
    for (int j = n; j > 0; j /= 2) // this loop is executed O(log n) times
    {
        // constant-time work here
    }
}

Explanation: The outer for loop should be clear; it is executed n times. Now for the inner loop. There, you take n and repeatedly divide it by 2. So you ask yourself: how many times can I divide n by 2 before reaching 1? If that takes k divisions, then 2^k is roughly n, so k is about log base 2 of n.

It turns out that this is O(log n). In fact, the base of the log is 2, but in Big-O notation we drop the base, since changing it only multiplies the log by a constant factor that we are not interested in.

So, you are executing a loop n times, and within that loop, you are executing another loop log(n) times. So, you have O(n) * O(log n) = O(n log n).
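
If you want to convince yourself empirically, here is a small self-contained sketch (my addition, not part of the original answer) that counts how many times the inner loop body runs as n grows:

public class NLogNDemo {
    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            long count = 0;
            for (int i = 0; i < n; i++) {        // runs n times
                for (int j = n; j > 0; j /= 2) { // runs about log2(n) times
                    count++;
                }
            }
            // count grows like n * log2(n), not like n or n^2
            System.out.printf("n = %,d -> %,d iterations (n*log2(n) ~ %,.0f)%n",
                    n, count, n * (Math.log(n) / Math.log(2)));
        }
    }
}

For n = 1,000 this prints about 10,000 iterations, close to 1,000 * log2(1,000) ≈ 9,966, and the gap between the count and plain n keeps widening as n grows.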

answered Oct 25 '22 by productioncoder


A very popular O(n log n) algorithm is merge sort; see http://en.wikipedia.org/wiki/Merge_sort for a description of the algorithm and pseudocode. The log n part of the running time comes from breaking the problem down into smaller subproblems, so that the height of the recursion tree is log n.

A lot of sorting algorithms have a running time of O(n log n). Refer to http://en.wikipedia.org/wiki/Sorting_algorithm for more examples.
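
For a concrete picture, here is a minimal top-down merge sort sketch in Java (my illustration of the Wikipedia pseudocode, not code taken from that page):

import java.util.Arrays;

public class MergeSortDemo {
    // Recursively sorts a[lo..hi] (inclusive). The recursion tree has
    // height about log2(n), and each level does O(n) merging work in
    // total, giving O(n log n) overall.
    static void mergeSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;          // base case: 0 or 1 element
        int mid = lo + (hi - lo) / 2;
        mergeSort(a, lo, mid);         // sort left half
        mergeSort(a, mid + 1, hi);     // sort right half
        merge(a, lo, mid, hi);         // merge the two sorted halves
    }

    // Merges the sorted ranges a[lo..mid] and a[mid+1..hi] in O(n) time.
    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo + 1];
        int i = lo, j = mid + 1, k = 0;
        while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= hi) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7, 3};
        mergeSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 5, 7, 9]
    }
}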

answered Oct 25 '22 by rcs


Algorithms whose O(.) time complexity involves a log n factor typically rely on some form of divide and conquer.

For example, in MergeSort the list is halved, each half is recursively merge-sorted, and then the two sorted halves are merged together.

Whenever you have work being halved or reduced in size by some fixed factor, you'll usually end up with a log n component of the O(.).
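
A concrete example of this halving pattern (my example, not from the original answer) is binary search, where each step discards half of the remaining range, so the loop runs O(log n) times:

public class BinarySearchDemo {
    // Returns the index of key in the sorted array a, or -1 if absent.
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {               // range shrinks by half each pass
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1; // discard the lower half
            else hi = mid - 1;              // discard the upper half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(a, 7)); // prints 3
    }
}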

In terms of code, take a look at the algorithm for MergeSort. The important feature of typical implementations is that they are recursive (note that TopDownSplitMerge calls itself twice in the code given on Wikipedia).

All good standard comparison-based sorting algorithms have O(n log n) time complexity, and it is not possible for a comparison sort to do better in the worst case; see Comparison sort.
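
The standard argument for that lower bound, sketched briefly (my summary, not part of the original answer): a comparison sort must distinguish all n! possible input orderings, so its decision tree has n! leaves, and a binary tree with n! leaves has height h of at least

h \ge \log_2(n!) \ge \log_2\!\left(\left(\tfrac{n}{2}\right)^{n/2}\right) = \tfrac{n}{2}\log_2\tfrac{n}{2} = \Omega(n \log n)

so the worst-case number of comparisons is Ω(n log n).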

To see what this looks like in Java code, just search! Here's one example.

answered Oct 25 '22 by Daniel Renshaw