 

What does it mean to find Big-O notation for memory?

Tags:

big-o

I have a question: what does it mean to find the Big-O order of the memory required by an algorithm?

That is, what's the difference between that and the Big-O order of the number of operations?

E.g., a question asks: given the following pseudo-code, with an initialized two-dimensional array A, both dimensions of size n:

for i <- 1 to n do
    for j <- 1 to n-i do
        A[i][j] = i + j

Wouldn't the Big-O notation for memory just be O(n^2), and the operations also O(n^2)?
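For concreteness, the pseudo-code can be translated directly into runnable Python (a sketch; the function name and the operation counter are illustrative, and the 1-based indexing of the pseudo-code is kept by over-allocating one row and column):

```python
def fill(n):
    # Allocate the full array up front: this is the O(n^2) memory.
    # Row/column 0 are unused padding so indices match the pseudo-code.
    A = [[0] * (n + 1) for _ in range(n + 1)]
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, n - i + 1):  # j runs 1 .. n-i
            A[i][j] = i + j
            ops += 1                   # count inner-loop assignments
    return A, ops
```

Running it shows the assignment count is n(n-1)/2, i.e. O(n^2) operations, while the allocation is (n+1)^2 cells, i.e. O(n^2) memory.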

Weadadada Awda, asked Nov 12 '12


4 Answers

Big-Oh is about how one quantity grows according to another (technically, a limit on how it grows). The most common introductory usage is for that quantity to be how fast an algorithm runs according to the size of its input.

There is nothing that says you can't have the something be how much memory is used according to the size of the input.

In your example, since there is a bucket in the array for every (i, j) pair, the space requirement grows as O(i*j), which is O(n^2).

But if your algorithm instead kept track of only the largest sum, rather than storing every sum in the array, the runtime complexity would still be O(n^2) while the space complexity would be constant: the algorithm only ever needs to keep track of the current i, the current j, the current max, and the sum being tested.
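That contrast can be sketched in Python (an illustrative variant, not the original pseudo-code): the loop structure, and hence the O(n^2) running time, is unchanged, but only a few scalars are kept, so the space is O(1).

```python
def max_sum(n):
    # Same double loop as the pseudo-code, but no n x n array:
    # the only state is i, j, the candidate sum, and the running max.
    best = None
    for i in range(1, n + 1):
        for j in range(1, n - i + 1):
            s = i + j          # computed, compared, then discarded
            if best is None or s > best:
                best = s
    return best
```

The function touches the same O(n^2) candidate values but never stores more than a constant number of them at once.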

hvgotcodes, answered Nov 14 '22


The Big-O order of memory means: how does the number of bytes needed to execute the algorithm vary as the number of elements processed increases? In your example, I think the Big-O order is n^2, because the data is stored in a square array of size n×n.

The big-O order of operations means how does the number of calculations needed to execute the algorithm vary as the number of elements processed increases.

Don Kirkby, answered Nov 14 '22


Yes, you are correct: the space and time complexity for the above pseudo-code are both O(n^2).

But for the code below, the space (memory) complexity is O(1), while the time complexity is still O(n^2). I usually go by the assignments and allocations done within the code, which gives you the memory complexity.

for i <- 1 to n do
    for j <- 1 to n-i do
        A[0][0] = i + j

Isaiah4110, answered Nov 14 '22


I honestly never heard of "big O for memory" but I can easily guess it is only loosely related to the computation time - probably only setting a lower bound.

As an example, it is easy to design an algorithm which uses n^2 memory and n^3 computation, but I think it is impossible to do the other way round - you cannot process n^2 cells of data with only O(n) computation.

Your algorithm performs roughly 1/2 * n^2 operations, thus O(n^2).
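The 1/2 * n^2 figure can be checked by counting the inner-loop iterations directly (a quick sketch; the function name is illustrative):

```python
def inner_iterations(n):
    # The inner loop runs n - i times for each i, so the total is
    # sum over i=1..n of (n - i) = n(n-1)/2, which is O(n^2).
    return sum(n - i for i in range(1, n + 1))
```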

thedayofcondor, answered Nov 14 '22