 

Expected running time vs. worst-case running time

I am studying the randomized-quicksort algorithm. I realized that the running time of this algorithm is always represented as "expected running time".

What is the reason for specifying or using the "expected running time"? Why don't we calculate the worst-case or average-case running time instead?

asked Oct 25 '11 by minyatur


2 Answers

Sometimes the expected running time means the average running time for a randomly chosen input. But if it's a randomized algorithm, then what is often meant is the expected running time with respect to the random choices made by the algorithm, for every input. That is what is meant here. For every input of n items, randomized quicksort runs in time O(n log n) on average, averaged only over its coin flips.
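To make that distinction concrete, here is a minimal sketch (my illustration, not part of the original answer) of randomized quicksort in Python. The only randomness is the pivot choice, and the expected O(n log n) bound is an average over those pivot choices for any one fixed input:

```python
import random

def randomized_quicksort(a):
    """Sort a list by partitioning around a uniformly random pivot.

    The expected O(n log n) bound is an average over the random pivot
    choices only; it holds for every fixed input list."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                      # the algorithm's "coin flip"
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```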

In this limited sense, expected running time is a very defensible measure of the running time of a randomized algorithm. If you're only worried about the worst that could happen when the algorithm flips coins internally, then why bother flipping coins at all? You might as well just make them all heads. (Which in the case of randomized quicksort, would reduce it to ordinary quicksort.)

Average case vs. worst case is a much more serious matter when it's an average over inputs rather than an average over coin flips. In this case, the average running time is at best a figure that does not adapt to changes in the type of input --- different uses of the algorithm could have different distributions of inputs. I say at best because you may not know whether the hypothetical distribution of inputs ever matches the true usage.

Looking at the worst case with respect to coin flips only makes sense when you would both like to run quickly when your coin flips aren't unlucky, and not run too slowly even when your coin flips are unlucky. For instance, imagine that you need a sorting algorithm for the regulator for an oxygen supply (for a medical patient or a scuba diver). Then randomized quicksort would only make sense if you want the result to be very fast usually, for the user's convenience, AND the worst case wouldn't suffocate the user no matter what. This is a contrived scenario for sorting algorithms, because there are non-random sorting algorithms (like merge sort) that are not much slower than quicksort is even on average. It is less contrived for a problem like primality testing, where randomized algorithms are much faster than non-randomized algorithms. Then you might want to make a run for it with a randomized algorithm --- while at the same time running a non-randomized algorithm as a backup.

(Okay, you might wonder why an oxygen regulator would want to know whether particular numbers are prime. Maybe it needs to communicate with a medical computer network, and the communication needs to be secure for medical privacy reasons...)
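(As a hypothetical illustration of that last point, not something from the answer itself: a Miller-Rabin primality test is such a randomized algorithm, and like randomized quicksort its guarantee is analysed over its own random choices --- the randomly chosen bases --- for every fixed input n.)

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin test: the randomness (the random bases) belongs to the
    algorithm, so its error bound holds per input, just as randomized
    quicksort's expected running time does."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)            # a random "coin flip"
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                          # a is a witness that n is composite
    return True                                   # wrong with probability <= 4**(-rounds)

print(is_probably_prime(2**61 - 1))  # True: a Mersenne prime
```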

answered Sep 20 '22 by Greg Kuperberg


When we say "expected running time", we are talking about the running time for the average case. We might be talking about an asymptotically upper or lower bound (or both). Similarly, we can talk about asymptotically upper and lower bounds on the running time of the best or worst cases. In other words, the bound is orthogonal to the case.

In the case of randomized quicksort, people talk about the expected running time, O(n log n), since this makes the algorithm seem better than worst-case O(n^2) algorithms (which it is, though not asymptotically in the worst case). In other words, randomized quicksort is asymptotically much faster than, e.g., Bubblesort for almost all inputs, and people want a way to make that fact clear; so people emphasize the average-case running time of randomized quicksort, rather than the fact that it is asymptotically as bad as Bubblesort in the worst case.

As pointed out in the comments and in Greg's excellent answer, it is also common to speak of the expected runtime with respect to the set of sequences of random choices made during the algorithm's execution on a fixed, arbitrary input. This may be more natural, since we think of the inputs as being passively acted upon by an active algorithm. In fact, it is equivalent to speak of an average over random inputs with an algorithm whose execution does not depend on the input's structure. Both of these formulations are easier to visualize than the true average over the set of pairs of inputs and random choices, but you get the same answers no matter which way you approach it.
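A small experiment (my own sketch, not from either answer) shows the per-input nature of that expectation: on one fixed, already-sorted input, a deterministic first-element pivot does quadratic work, while a random pivot keeps the same input near n log n.

```python
import random

def count_steps(a, randomized=True):
    """Return a constant-factor proxy for the work quicksort does on `a`,
    using either a random pivot or a fixed first-element pivot."""
    steps = 0

    def sort(xs):
        nonlocal steps
        if len(xs) <= 1:
            return xs
        pivot = random.choice(xs) if randomized else xs[0]
        steps += len(xs)                          # one pass over xs per partition
        less    = [x for x in xs if x < pivot]
        equal   = [x for x in xs if x == pivot]
        greater = [x for x in xs if x > pivot]
        return sort(less) + equal + sort(greater)

    sort(list(a))
    return steps

# An already-sorted input: worst case for the fixed pivot, but the random
# pivot stays near n log n on this very same input.  (n kept small so the
# fixed-pivot recursion fits in Python's default recursion limit.)
data = list(range(800))
print("fixed first pivot:", count_steps(data, randomized=False))  # ~ n^2 / 2, about 320,000
print("random pivot:     ", count_steps(data, randomized=True))   # ~ 2 n ln n, about 10,000
```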

answered Sep 18 '22 by Patrick87