Big O Notation and Time Complexity of C++ Code Snippet

Tags:

c++

big-o

I am looking for confirmation of the time complexity of this C++ code snippet:

for (int i = 0; i < N; i++) {
    for (int k = 1; k < N; k *= 2) {
        // code with O(1) cost
    }
}

I am thinking this would be O(NlgN) where lg is log base 2. The inner loop would be O(lgN) since k is doubling after each iteration. The outer loop is clearly O(N), making the whole code:

 O(N)*O(lgN) = O(NlgN).
Schooter asked Feb 22 '19





1 Answer

Yes, it is in O(n log n), but the base does not matter in big O notation, since

f = n \cdot \log_2(n) \in \mathcal{O}(\log_2(n) \cdot n) \subseteq \mathcal{O}\left(\frac{\ln(n)}{\ln(2)} \cdot n\right) \subseteq \mathcal{O}(\log(n) \cdot n) \ni n \cdot \ln(n)


Note that the log at the end should strictly be ln, but nobody worries about whether log means base 10, 2, or e here, since the base does not matter in big O.

So even for(int k = 1; k < N; k *= 3) would have the same complexity in big O notation; only the constant factor changes. People do sometimes write down the constant factors when comparing very minor optimisations, but that is rarely worthwhile unless you are talking about something like a quicksort implementation that runs on billions of inputs around the world.

As for how we can be sure the inner loop really is bounded by log(n): executing it is a kind of empirical proof, but the theoretical argument goes like this. The inner loop runs once per doubling of k, so it runs x times, where x is the smallest value with 2^x >= n. Solving for x means applying the inverse of the function 2^x, which is log_2, giving x = ceil(log_2(n)).

Superlokkus answered Oct 01 '22
