Time and Space Complexity of an Algorithm - Big O Notation

I am trying to analyze the Big-O notation of a simple algorithm, and it has been a while since I've worked with it. I've come up with an analysis and am trying to figure out whether it is correct according to the rules, for the following code:

public int Add()
{
  int total = 0; //Step 1

  foreach(var item in list) //Step 2
  {
    if(item.value == 1) //Step 3
    {
      total += 1; //Step 4
    }
  }
  return total;
}
  1. If you assign a value to a variable, the complexity according to the rules of Big O is O(1). So the first step is O(1) - this means that whatever the input size is, this statement takes the same amount of time and memory.

  2. The second step is the foreach loop. One thing is pretty clear about the loop: it iterates according to the input. For example, for an input of 10 items the loop iterates 10 times, and for 20 items it iterates 20 times; it depends entirely on the input. According to the rules of Big O, the complexity is O(n), where n is the number of inputs. So in the above code, the loop iterates once per item in the list.

  3. In this step, we perform a condition check (see Step 3 in the code). In that case, the complexity is O(1) according to the Big O rules.

  4. Similarly, in step 4 (see Step 4 in the code), if the condition check is true, the total variable is incremented by 1. So I write down a complexity of O(1).

So if the above calculations are correct, the final complexity stands as follows:

O(1) + O(n) + O(1) + O(1) or (O(1) + O(n) * O(1) + O(1))
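
To make the step counts concrete, here is a rough sketch I put together to count the basic operations at runtime (the Item class, its value field, and the StepCounter wrapper are my own assumptions, just for illustration):

using System.Collections.Generic;

public class Item
{
  public int value;
}

public class StepCounter
{
  // Counts the basic operations the Add() loop above performs for a given list.
  public static long CountSteps(List<Item> list)
  {
    long steps = 0;

    steps++; // Step 1: int total = 0
    int total = 0;

    foreach (var item in list)
    {
      steps++; // Step 2: one loop iteration
      steps++; // Step 3: the condition check
      if (item.value == 1)
      {
        steps++; // Step 4: the increment
        total += 1;
      }
    }
    return steps; // roughly 3n + 1 for n items, i.e. linear in n
  }
}

Doubling the number of items roughly doubles the count, which is the linear growth I would expect from O(n).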

I am not sure whether this is correct, so I would appreciate some clarification if it isn't. Thanks.

AT-2017 asked Dec 25 '16


2 Answers

Big O notation is used to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.

For example, when analyzing some algorithm, one might find that the time (or the number of steps) it takes to complete a problem of size n is given by

T(n) = 4 n^2 - 2 n + 2

If we ignore constants (which makes sense because they depend on the particular hardware the program is run on) and slower-growing terms, we could say "T(n) grows on the order of n^2" and write: T(n) = O(n^2)

For the formal definition, suppose f(x) and g(x) are two functions defined on some subset of the real numbers. We write

f(x) = O(g(x))

(or f(x) = O(g(x)) for x -> infinity to be more precise) if and only if there exist constants N and C such that

|f(x)| <= C|g(x)| for all x>N

Intuitively, this means that f does not grow faster than g
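
For instance, applying this definition to the T(n) above (the choice C = 4 and N = 1 is just one of many that work):

T(n) = 4n^2 - 2n + 2
     <= 4n^2              for all n >= 1, since 2 - 2n <= 0 there

So |T(n)| <= C|n^2| for all n > N with C = 4 and N = 1, and therefore T(n) = O(n^2).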

If a is some real number, we write

f(x) = O(g(x)) for x->a

if and only if there exist constants d > 0 and C such that

|f(x)| <= C|g(x)| for all x with |x-a| < d

So for your case it would be

O(n), since the number of steps f(n) your method performs satisfies |f(n)| <= C|n| for some constant C and all sufficiently large n.

Reference from http://web.mit.edu/16.070/www/lecture/big_o.pdf

int total = 0;
for (int i = 0; i < n; i++) {     // outer loop --> runs n times
    for (int j = 0; j < n; j++) { // inner loop --> runs n times per outer iteration
        total = total + 1;        // constant-time work, executed n * n times
    }
}

Big O notation describes the behaviour when n is very large: the outer loop runs n times, and for each of those iterations the inner loop runs n times, so the innermost statement executes n * n times.

Assume n = 100; then the statement runs n^2 = 10,000 times, so the complexity is O(n^2).
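
A quick way to see this growth is to count the executions of the innermost statement directly; here is a minimal, self-contained C# sketch (class and method names are just for illustration):

using System;

public class NestedLoopCount {
    // Returns how many times the innermost statement runs for a given n.
    public static long Count(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                total = total + 1;
            }
        }
        return total;
    }

    public static void Main() {
        Console.WriteLine(Count(100)); // 10000
        Console.WriteLine(Count(200)); // 40000 - doubling n quadruples the count
    }
}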

SarthAk answered Nov 15 '22


Your analysis is not exactly correct.

  1. Step 1 indeed takes O(1) operations
  2. Step 2 indeed takes O(n) operations
  3. Step 3 takes O(1) operations, but it is executed n times, so its whole contribution to complexity is O(1*n)=O(n)
  4. Step 4 takes O(1) operations, but it is executed up to n times, so its whole contribution to complexity is also O(1*n)=O(n)

The whole complexity is O(1)+O(n)+O(n)+O(n) = O(n).
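
To put a rough number on it (counting one basic operation per step, which is an approximation):

total operations <= 1 + n + n + n   (the initialization, plus n iterations, n checks, and at most n increments)
                 =  3n + 1
                 <= 4n               for all n >= 1

which is linear in n, i.e. O(n).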

user31264 answered Nov 15 '22