 

How to solve Big-O Notation for prime number function?

I am trying to understand Big-O notation, so sorry if I am asking something that is too obvious, but I can't seem to wrap my head around this.

I have the following C# code that I am trying to calculate the Big-O notation for.

int i, j;
for (i = 2; i < 100; i++)
{
    // trial division: test divisors j while j <= i / j, i.e. roughly while j*j <= i
    for (j = 2; j <= (i / j); j++)
        if ((i % j) == 0) break; // if factor found, not prime

    // the inner loop only ran to completion (j > i / j) if no factor was found
    if (j > (i / j))
        Console.WriteLine("{0} is prime", i);
}

What I have got so far is that I think both the if clauses are considered constant, O(1), and are not taken into account when calculating the complexity of this algorithm. Also, if I have understood correctly, a single for loop such as

for(i = 0; i < 100; i++)

would be O(n), since the running time grows linearly, and a nested loop that does not depend on a variable from the surrounding loop, such as

for(i = 0; i < 100; i++)
    for(j = 0; j < 100; j++)

is O(n^2)? But how do I calculate the complexity of a function such as the one at the top, where the second loop depends on the first loop and creates a non-linear running time?

[Picture of my data points]

I found a definition of "linearithmic" that says:

Linearithmic algorithm scales to huge problems. Whenever N doubles, the running time more (but not much more) than doubles.

While this seems to be a good description of how this code snippet runs, would that mean that it is O(n log n), and if so, how could I calculate this?
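One way to check this empirically (just a sketch; the upper bound n and the opCount counter are additions of mine, not part of the snippet above) is to count how many times the inner loop body runs for a given n:

using System;

class PrimeOps
{
    static void Main()
    {
        int n = 100;      // upper bound; increase this to collect more data points
        long opCount = 0; // how many times the inner loop body runs

        int i, j;
        for (i = 2; i < n; i++)
        {
            for (j = 2; j <= (i / j); j++)
            {
                opCount++;
                if ((i % j) == 0) break; // if factor found, not prime
            }
            if (j > (i / j))
                Console.WriteLine("{0} is prime", i);
        }

        Console.WriteLine("n = {0}, inner loop ran {1} times", n, opCount);
    }
}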

Karl-Henrik asked Apr 29 '14



1 Answer

@Jon is close but his analysis is a bit wrong, and the real complexity of your algorithm is O(n*sqrt(n)).

This is based on the fact that for each number i, the expected amount of 'work' you do in the inner loop is:

1/2 + 2/3 + 3/4 + ... + (sqrt(i)-1)/sqrt(i)
 = (1 - 1/2) + (1 - 1/3) + ... + (1 - 1/sqrt(i))
 = (sqrt(i) - 1) - (1/2 + 1/3 + ... + 1/sqrt(i))
 = sqrt(i) - (1 + 1/2 + 1/3 + ... + 1/sqrt(i))
 = sqrt(i) - H_sqrt(i)

Since H_sqrt(i) (the harmonic number) is in O(log(sqrt(i))) = O(1/2 * log(i)), we can conclude that the complexity is O(sqrt(i) - log(i)) = O(sqrt(i)) per number tested.

Since this is done repeatedly for each i, the total complexity of the problem is O(sqrt(2) + sqrt(3) + ... + sqrt(n)). According to this forum thread, the sum of square roots is in O(n*sqrt(n)), which is "worse" than O(n*log(n)).
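As a rough numerical sanity check (a sketch of mine, not part of the original answer), you can count the inner-loop iterations while doubling n and print the count next to n*log2(n) and n*sqrt(n), so their growth rates can be compared:

using System;

class GrowthCheck
{
    // Counts how many times the inner trial-division loop body runs for all i < n.
    static long InnerIterations(int n)
    {
        long ops = 0;
        for (int i = 2; i < n; i++)
            for (int j = 2; j <= (i / j); j++)
            {
                ops++;
                if ((i % j) == 0) break; // factor found, not prime
            }
        return ops;
    }

    static void Main()
    {
        // Double n a few times and print the measured count next to n*log2(n)
        // and n*sqrt(n) so the curves can be compared as n grows.
        for (int n = 1000; n <= 512000; n *= 2)
        {
            long ops = InnerIterations(n);
            Console.WriteLine("n = {0,7}  ops = {1,11}  n*log2(n) = {2,11:F0}  n*sqrt(n) = {3,13:F0}",
                n, ops, n * Math.Log(n, 2), n * Math.Sqrt(n));
        }
    }
}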

Things to notice:

  1. The sum runs up to sqrt(i), since that is the point where j > (i / j), i.e. j^2 > i.
  2. The term for each j is (j-1)/j, because on average one out of every j candidates hits the break (1/3 of the numbers are divisible by 3, 1/4 by 4, ...); the remaining (j-1)/j are not, and that is the expected work at that step. A quick numerical check of this appears after the list.
  3. The equality O(log(sqrt(n))) = O(1/2 * log(n)) comes from O(log(n^k)) = O(k*log(n)) = O(log(n)) for any constant k (in your case k = 1/2).
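To make point 2 concrete, here is a small check (again a sketch of mine) that roughly one out of every j candidates is divisible by j:

using System;

class DivisibilityFractions
{
    static void Main()
    {
        const int n = 100000; // how many candidates to sample

        // For each small divisor j, measure the fraction of i in [2, n)
        // that is divisible by j; it should come out close to 1/j.
        for (int j = 2; j <= 10; j++)
        {
            int divisible = 0;
            for (int i = 2; i < n; i++)
                if (i % j == 0) divisible++;

            Console.WriteLine("j = {0,2}: measured fraction = {1:F4}, 1/j = {2:F4}",
                j, (double)divisible / (n - 2), 1.0 / j);
        }
    }
}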
amit answered Oct 23 '22