
How to determine memory and time complexity of an algorithm?

I am not good at determining time and memory complexities and would appreciate it if someone could help me out.

I have an algorithm here, and I am not sure what its time and memory complexities would be.

function sample(k)
    if k < 2
        return 0
    return 1 + sample(k/2)
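
In Python this would be (assuming k is a non-negative integer, so the division rounds down):

def sample(k):
    # base case: recursion stops once k drops below 2
    if k < 2:
        return 0
    # halve the input and add 1 for this level of recursion
    return 1 + sample(k // 2)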

What is its time and memory complexity and why?

Thanks

asked Dec 27 '13 by user3138212




1 Answer

Determining time and memory complexities amounts to counting how much of these two resources are used when running the algorithm, and seeing how these amounts change as the input size (k in this case) changes.

Time is going to be determined by how many times each of the instructions is evaluated, and space will be determined by how large the data structures involved need to get to compute the solution.

In this particular scenario, you're looking at a recursive algorithm, so the analysis boils down to counting (1) how many recursive calls are made and (2) how much work is done in each of those calls.

Since the input is halved with each call, the sequence of calls will look something like this:

sample(k)       )
 sample(k/2)    )
  sample(k/4)   )
   ...          ) - n = number of calls
    sample(4)   )
     sample(2)  )
      sample(1) )

Halving the input at each recursive call in this way leads to a logarithmic number of calls:

n = log(k)

(The logarithm is base 2, since the input is halved at each call, but the base only affects the constant factor.)
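
To see this concretely, here is a minimal Python sketch; the calls counter is added purely for illustration and is not part of the original algorithm:

import math

calls = 0

def sample(k):
    global calls
    calls += 1                 # count every invocation
    if k < 2:
        return 0
    return 1 + sample(k // 2)

for k in (2, 16, 1024, 10**6):
    calls = 0
    sample(k)
    # prints k, the call count, and log2(k); the count is floor(log2(k)) + 1
    print(k, calls, math.log2(k))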

At each call we store a constant number of variables on the call stack and do a constant amount of work (operations). This stems from the fact that the number of variables and the number of comparisons/additions/divisions in each call does not grow as k grows.

The total time complexity is the number of calls multiplied by the amount of work done in each call, thus

time complexity = A*log(k) + B

For some constants A and B, which reflect the actual time cost of making a recursive call and of doing the comparisons/divisions, respectively. Similarly:

space complexity = C*log(k) + D

For suitable constants C and D, which reflect the space cost of a recursive call and of variable storage, respectively.

Now, in this kind of analysis we care mostly about the asymptotic complexity: we don't really care about the constants, because they reflect details of the machine running the algorithm, and what we really want to know is the shape of the curve as k gets bigger. Following the rules for writing complexity in Big-O notation, you arrive at the result:

space complexity = time complexity = O(log(k))

Edit: Memory complexity details

As I said before, memory complexity is determined by how large the data structures need to get to compute a solution, so you may ask: there are no data structures being used in this function, so where is the log(k) memory going?

The short answer: You have to store log(k) different values of the parameter k, one for each recursive call.

The detailed answer: there is an implicit data structure being used here by the mechanism of function calling (which we exploit via recursion), and its name is the call stack. Each time sample(k) is called, a new stack frame is created and a number of values are pushed onto the stack: the local value of the parameter k, the return address, and other implementation-dependent things. In this way each recursive call forms a 'layer' on the stack where its local information is stored. The whole picture ends up looking something like this:

----------- < top of stack
|  k = 1  |
|   ...   | < stack frame for sample(1)
|---------|
|  k = 2  |
|   ...   | < stack frame for sample(2)
|---------|

    ...

|---------|
| k = p/2 |
|   ...   | < stack frame for sample(p/2)
|---------|
|  k = p  |
|   ...   | < stack frame for sample(p)
|---------|
|         | < stack frame for main() or whatever 
              initially called sample(p) 
              (we don't count this one)

(To avoid confusion, I've used p here for the initial parameter value, to distinguish it from the value of k at each recursive call.)

The main thing to note is that, since there are n = log(k) recursive calls, there are n such stack frames. Each stack frame has constant size, and thus the space complexity is O(log(k)).
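
To make these frames visible, here is a small sketch; the depth parameter is hypothetical, added only so each call can report its own frame number:

def sample(k, depth=1):
    # 'depth' mirrors the number of sample() frames currently on the stack
    print(f"stack frame #{depth}: k = {k}")
    if k < 2:
        return 0
    return 1 + sample(k // 2, depth + 1)

sample(16)   # prints 5 frames (k = 16, 8, 4, 2, 1), i.e. log2(16) + 1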

answered Oct 29 '22 by Ezequiel Muns