
How do you determine the average-case complexity of this algorithm?

It's usually easy to calculate the time complexity for the best case and the worst case, but when it comes to the average case, especially when a probability p is given, I don't know where to start.

Let's look at the following algorithm to compute the product of all the elements in a matrix:

int computeProduct(int[][] A, int m, int n) {
    int product = 1;
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            if (A[i][j] == 0) return 0; // a single zero makes the whole product zero
            product = product * A[i][j];
        }
    }
    return product;
}

Suppose p is the probability that A[i][j] is 0 (i.e., the algorithm terminates there and returns 0); how do we derive the average-case time complexity of this algorithm?

asked Nov 26 '17 by lyming90


1 Answer

Let’s consider a related problem. Imagine you have a coin that comes up heads with probability p. How many times, in expectation, do you need to flip the coin before it comes up heads? The answer is 1/p, since

  • There’s a p chance that you need one flip.
  • There’s a p(1-p) chance that you need two flips (the first flip has to go tails and the second has to go heads).
  • There’s a p(1-p)^2 chance that you need three flips (the first two flips need to go tails and the third has to go heads)
  • ...
  • There’s a p(1-p)^(k-1) chance that you need k flips (the first k-1 flips need to go tails and the kth needs to go heads.)

So this means the expected value of the number of flips is

p + 2p(1 - p) + 3p(1 - p)^2 + 4p(1 - p)^3 + ...

= p(1(1 - p)^0 + 2(1 - p)^1 + 3(1 - p)^2 + ...)

So now we need to work out what this summation is. The general form is

p sum from k = 1 to infinity (k(1 - p)^(k-1)).

Rather than solving this particular summation, let's make this more general. Let x be some variable that, later, we'll set equal to 1 - p, but which for now we'll treat as a free value. Then we can rewrite the above summation as

p sum from k = 1 to infinity (kx^(k-1)).

Now for a cute trick: notice that the inside of this expression is the derivative of x^k with respect to x. Therefore, this sum is

p sum from k = 1 to infinity (d/dx x^k).

The derivative is a linear operator, so we can move it out to the front:

p d/dx sum from k = 1 to infinity (x^k)

That inner sum (x + x^2 + x^3 + ...) is the Taylor series for 1 / (1 - x) - 1 (convergent for |x| < 1, which holds here because x = 1 - p and 0 < p ≤ 1), so we can simplify this to get

p d/dx (1 / (1 - x) - 1)

= p / (1 - x)^2

And since we picked x = 1 - p, this simplifies to

p / (1 - (1 - p))^2

= p / p^2

= 1 / p

Whew! That was a long derivation. But it shows that the expected number of coin tosses needed is 1/p.
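
If you'd like to convince yourself of this numerically, here is a small Java sketch (my own illustration, not part of the original answer; the class name and the choices of p and the trial count are arbitrary) that estimates the expected number of flips by simulation and compares it against 1/p:

import java.util.Random;

public class GeometricCheck {
    public static void main(String[] args) {
        double p = 0.1;          // chance of heads on each flip (assumed value)
        int trials = 1_000_000;  // number of simulated experiments (assumed value)
        Random rng = new Random(42);

        long totalFlips = 0;
        for (int t = 0; t < trials; t++) {
            int flips = 0;
            do {
                flips++;
            } while (rng.nextDouble() >= p); // keep flipping until the coin comes up heads
            totalFlips += flips;
        }

        System.out.printf("observed  E[flips] = %.4f%n", (double) totalFlips / trials);
        System.out.printf("predicted 1/p      = %.4f%n", 1.0 / p);
    }
}

With p = 0.1 and a million trials, the observed average should land very close to the predicted value of 10.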

Now, in your case, your algorithm can be thought of as tossing mn coins that each come up heads with probability p, stopping as soon as one of them comes up heads. The expected number of coins you’d need to toss certainly won’t exceed the case where you’re allowed to flip infinitely often, so your expected runtime is at most O(1 / p) (assuming p > 0).

If we assume that p is independent of m and n, then notice that after some initial growth, each added term in our summation is exponentially smaller than the previous ones: the terms shrink geometrically with ratio roughly 1 - p. So once enough terms have been added, the partial sum is within a constant factor of the infinite sum. Therefore, provided that mn is sufficiently large relative to 1 / p, the sum ends up being Θ(1 / p). So in a big-O sense, if mn is independent of p, the runtime is Θ(1 / p).
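
To tie this back to the original algorithm, here is a hypothetical sketch along the same lines (again my own, with arbitrary choices of m, n, p, and trial count) that counts how many cells the loop inspects on random matrices and compares the average against 1/p:

import java.util.Random;

public class AverageCaseDemo {
    // Counts how many cells the product loop inspects before it returns.
    static int inspectedCells(int[][] A, int m, int n) {
        int count = 0;
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                count++;
                if (A[i][j] == 0) return count; // early exit on a zero
            }
        }
        return count; // no zero found: all m*n cells were inspected
    }

    public static void main(String[] args) {
        int m = 100, n = 100; // matrix dimensions (assumed values)
        double p = 0.05;      // probability that a cell is zero (assumed value)
        int trials = 100_000;
        Random rng = new Random(7);

        long total = 0;
        for (int t = 0; t < trials; t++) {
            int[][] A = new int[m][n];
            for (int i = 0; i < m; i++)
                for (int j = 0; j < n; j++)
                    A[i][j] = rng.nextDouble() < p ? 0 : 1; // each cell is 0 w.p. p
            total += inspectedCells(A, m, n);
        }

        System.out.printf("average cells inspected = %.2f%n", (double) total / trials);
        System.out.printf("prediction ~1/p         = %.2f%n", 1.0 / p);
    }
}

Because mn = 10,000 is far larger than 1/p = 20 here, the early exit dominates and the average should come out near 20, essentially independent of the matrix size.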

answered Nov 11 '22 by templatetypedef