How to find time complexity of an algorithm

I have searched Google and Stack Overflow, but nowhere could I find a clear and straightforward explanation of how to calculate the time complexity of an algorithm.

What do I know already?

Say for a code as simple as the one below:

char h = 'y'; // This will be executed 1 time
int abc = 0;  // This will be executed 1 time

Say for a loop like the one below:

for (int i = 0; i < N; i++) {
    Console.Write("Hello World!");
}
  • int i = 0; This will be executed only once. The cost counted is the assignment i = 0, not the declaration itself.
  • i < N; This will be executed N+1 times (the final, failing check is what ends the loop).
  • i++ This will be executed N times.

So the number of operations required by this loop is 1 + (N+1) + N = 2N + 2. (But this still may be wrong, as I am not confident about my understanding.)
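Your count is right, and you can sanity-check it by instrumenting the loop. The sketch below (in Python rather than the C# above, purely for illustration) tallies one unit of work for the initialization, each comparison, and each increment:

```python
def count_loop_operations(n):
    """Count the operations performed by `for (int i = 0; i < n; i++)`."""
    ops = 0
    i = 0
    ops += 1              # the initialization i = 0 runs once
    while True:
        ops += 1          # the test i < n runs n + 1 times (the last check fails)
        if not (i < n):
            break
        i += 1
        ops += 1          # the increment i++ runs n times
    return ops

print(count_loop_operations(10))  # 2*10 + 2 = 22
```

For N = 10 this prints 22, matching 2N + 2; even an empty loop (N = 0) costs 2 operations, the initialization plus the single failing test.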

OK, so I think I understand these small basic calculations, but in most cases I have seen the time complexity expressed as O(N), O(n^2), O(log n), O(n!), and many others.
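Those classes describe how the operation count grows with the input size, and the difference between them becomes dramatic for large n. A quick Python sketch to get a feel for it (the specific n values are arbitrary):

```python
import math

# Tabulate how common complexity classes grow as n increases.
# math.lgamma(n + 1) = ln(n!), so dividing by ln(10) gives log10(n!).
for n in (10, 100, 1000):
    print(f"n={n:>5}  log n={math.log2(n):6.1f}  "
          f"n^2={n**2:>9}  n! ~ 10^{math.lgamma(n + 1) / math.log(10):.0f}")
```

Even at n = 100, an O(n!) algorithm needs on the order of 10^158 operations while O(log n) needs about 7, which is why only the complexity class, not the exact formula, usually matters.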

Yasser Shaikh asked Jun 14 '12 11:06


People also ask

Why do we calculate time complexity?

To elaborate, time complexity measures the time taken to execute each statement of code in an algorithm. If a statement executes repeatedly, its total cost is the number of times it runs multiplied by the time each run takes.

How is time complexity measured?

Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform.

What is complexity formula?

The time complexity, measured in the number of comparisons, then becomes T(n) = n - 1. In general, an elementary operation must have two properties: its execution time must be bounded above by a constant, and no other operation may be performed more frequently as the size of the input grows.
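The T(n) = n - 1 formula fits the classic example of finding the maximum of n elements, where the element comparison is taken as the elementary operation. A minimal sketch (assuming that is the algorithm intended):

```python
def find_max(items):
    """Return (maximum, comparison_count), treating the element
    comparison as the elementary operation."""
    comparisons = 0
    best = items[0]
    for x in items[1:]:
        comparisons += 1      # exactly one comparison per remaining element
        if x > best:
            best = x
    return best, comparisons

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # (9, 7): n = 8 gives n - 1 = 7
```

Every element after the first is compared against the running maximum exactly once, so the count is always n - 1 regardless of the input's order.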


1 Answer

How to find time complexity of an algorithm

You add up how many machine instructions it will execute as a function of the size of its input, then simplify the expression to its largest term (the one that dominates when N is very large) and drop any constant factor.

For example, let's see how we simplify 2N + 2 machine instructions down to just O(N).

Why do we remove the two 2s?

We are interested in the performance of the algorithm as N becomes large.

Consider the two terms 2N and 2.

What is the relative influence of these two terms as N becomes large? Suppose N is a million.

Then the first term is 2 million and the second term is only 2.

For this reason, we drop all but the largest terms for large N.

So, now we have gone from 2N + 2 to 2N.

Traditionally, we are only interested in performance up to constant factors.

This means that we don't really care if there is some constant multiple of difference in performance when N is large. The unit of 2N is not well-defined in the first place anyway. So we can multiply or divide by a constant factor to get to the simplest expression.

So 2N becomes just N.
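The two simplification steps above can be watched numerically: as N grows, the dropped "+2" becomes negligible, and what remains differs from N only by the constant factor 2, which big-O notation ignores. A small sketch:

```python
# As N grows, 2N + 2 is dominated by its largest term, and the ratio
# (2N + 2) / N settles at the constant 2 -- which is exactly the
# constant factor that O(N) throws away.
for n in (10, 1_000, 1_000_000):
    exact = 2 * n + 2
    print(f"N={n:>9}  2N+2={exact:>9}  (2N+2)/N = {exact / n:.6f}")
```

The ratio column converges to 2: the exact count and plain N stay within a constant multiple of each other, so both are O(N).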

Andrew Tomazos answered Oct 16 '22 15:10