Difference between O(m+n) and O(mn)?

I was trying to find the complexity of an algorithm via different approaches. Mathematically I came across one O(m+n) approach and another O(mn) approach, but I am unable to grasp, or say visualize, the difference. It's not like I look at them and get the "Ahh! That's what's going on" feeling! Can someone explain this using their own examples or any other tool?

Asked May 27 '14 by Dubby


2 Answers

O(m+n) example:

for (int i = 0; i < m; i++)
    // code
for (int j = 0; j < n; j++)
    // code

m iterations of code happen, then n iterations of code happen, for m + n iterations in total.
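As a concrete filling-in of the // code placeholders (my own sketch, with a hypothetical countIterations helper), counting the iterations makes the m + n total visible:

static long countIterations(int m, int n) {
    long count = 0;
    for (int i = 0; i < m; i++)
        count++;    // first loop body runs m times
    for (int j = 0; j < n; j++)
        count++;    // second loop body runs n times
    return count;   // always m + n
}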

O(mn) example:

for (int i = 0; i < m; i++)
    for (int j = 0; j < n; j++)
        // code

For each of the m outer iterations, the inner body runs n times, so the code runs m * n times in total. Imagine iterating over a non-square 2D array.
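For instance (again my own sketch, not part of the original answer), summing an m-by-n array touches each of the m * n cells exactly once:

static long sumGrid(int[][] grid) {    // m rows, n columns
    long sum = 0;
    for (int i = 0; i < grid.length; i++)
        for (int j = 0; j < grid[i].length; j++)
            sum += grid[i][j];         // body executes m * n times
    return sum;
}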

m and n are not necessarily equal. If they were equal, then for O(m+n):

O(m+n) => O(m+m) => O(2m) => O(m)

I'd recommend looking at this question/answer in order to understand that last transition.

And for O(mn):

O(mn) => O(mm) => O(m^2)
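To spell out both transitions (my own addition), recall that big-O absorbs constant factors: f is O(g) whenever f \le C \cdot g for some fixed constant C. So

2m \le C \cdot m \ \text{with}\ C = 2 \implies O(2m) = O(m), \qquad m \cdot m = m^2 \implies O(mm) = O(m^2).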
Answered Sep 28 '22 by But I'm Not A Wrapper Class


My recommendation for finding intuition is thought experiments as follows:

First, realize that m and n are two different measurements of the input. They might be the lengths of two input streams, the lengths of sides of a matrix, or the counts of two different attributes of the same data structure, such as edge and node count of the same graph, or any similar measures.
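To make the graph case concrete (a minimal sketch of my own, assuming an adjacency-list representation; not from the original answer), breadth-first search runs in O(n + m) time for n vertices and m edges, because every vertex is enqueued at most once and every edge list is scanned once:

import java.util.*;

public class BfsDemo {
    public static void bfs(List<List<Integer>> adj, int start) {
        boolean[] seen = new boolean[adj.size()];
        Deque<Integer> queue = new ArrayDeque<>();
        seen[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {      // each vertex dequeued once: O(n)
            int v = queue.poll();
            for (int w : adj.get(v)) {  // all edge lists together: O(m)
                if (!seen[w]) {
                    seen[w] = true;
                    queue.add(w);
                }
            }
        }
    }
}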

The intuition is that big-O expresses a bound on the true run time (or some other aspect such as comparison count or space needed) of an algorithm in terms of a simple function - call that R(m, n) - multiplied by some arbitrary constant. We ignore the constant factors and think of all algorithms bounded by the same R as a family by calling their run times O( R(m, n) ).

Consequently, O(m + n) says that the true run time is bounded by C(m + n) for some constant C and suitably big m and n. For the graph example, this says that the actual run time of the algorithm will be bounded by a multiple of the sum of the number of vertices and edges.
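For reference (my own formalization of the idea above), the two-variable definition being used is:

f(m, n) = O(R(m, n)) \iff \exists\, C > 0,\ m_0,\ n_0 \ \text{such that}\ f(m, n) \le C \cdot R(m, n) \ \text{for all}\ m \ge m_0,\ n \ge n_0.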

You can think of the bounding function as a graph in 3d with axes m, n, and R(m,n). Or you can think of charts:

R(m,n) = m + n
--------------
m=  1  2  3  4
n=1 2  3  4  5
  2 3  4  5  6
  3 4  5  6  7
  4 5  6  7  8

For R(m,n) = mn, you have

R(m,n) = mn
--------------
m=  1  2  3  4
n=1 1  2  3  4
  2 2  4  6  8
  3 3  6  9 12
  4 4  8 12 16

As a 3d graph, the first function is a plane and the second is a much faster-growing function at almost all points. This means that if m and n grow large enough, an O(mn) bound will ultimately be larger (corresponding to a potentially slower program) than an O(m+n) bound, because the constants become insignificant.

For an example of the cost of rapid growth, suppose an O(m+n) algorithm has an extra constant factor of 3 in its runtime bound (making it potentially very slow on small inputs compared to both algorithms above):

R(m,n) = 3(m + n)
--------------
m=   1  2  3  4
n=1  6  9 12 15
  2  9 12 15 18
  3 12 15 18 21
  4 15 18 21 24

So in the charts above, the O(m + n) algorithm's bound looks worse than the O(mn) one. But look at the case m = n = 100. There the bound on the O(m + n) algorithm is 3(m + n) = 600, while the O(mn) algorithm with the small constant has bound mn = 10000. Clearly you want the first if m and n are large.
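A quick check of that crossover (my own demo, reusing the numbers above): for m = n the two bounds are 6n and n^2, which cross at n = 6:

public class BoundCrossover {
    public static void main(String[] args) {
        // Compare the two bounds for m = n at a few sizes.
        for (int n : new int[] {1, 2, 6, 10, 100}) {
            long linear = 3L * (n + n);     // 3(m + n) with m = n
            long quadratic = (long) n * n;  // mn with m = n
            System.out.printf("n=%d: 3(m+n)=%d, mn=%d%n", n, linear, quadratic);
        }
    }
}

By n = 100 the gap is 600 versus 10000, matching the comparison above.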

@Anonymous raised a fine point on the initial version of this answer, which confused big-O and big-Theta. Big-O only deals with bounds, or upper limits, on the quantity being measured. For example, this means that every O(n) algorithm is also O(n log n) and O(n^2): if the real run time is bounded by the slower-growing function, it is also bounded by all faster-growing ones.

Yet it is quite common for people to say "this algorithm is O(n)" while meaning that the bound is tight: that Cn is an upper bound on the run time for some constant C, and Dn is also a lower bound for some other constant D (and suitably large n). Such a tight bound is properly stated as Theta(n), not O(n). The run time of a Theta(R(m, n)) algorithm is (roughly speaking) proportional to R(m, n).
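In symbols (my own addition, matching the prose above):

f(m, n) = \Theta(R(m, n)) \iff \exists\, C, D > 0 \ \text{such that}\ D \cdot R(m, n) \le f(m, n) \le C \cdot R(m, n) \ \text{for all sufficiently large}\ m, n.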

Finally, I'll add that there are many cases where you can't ignore constants. The literature contains lots of algorithms that are asymptotically "faster" than those in common use, but have constants so large that for practical problem sizes they are always too slow. Computational geometry has many examples. Radix-2 sort is another: it's Theta(n), but in practice a good quicksort (Theta(n log n) average case) will beat it on arrays of size up to at least 10^8 integers.

Answered Sep 28 '22 by Gene