 

median of three values strategy

What is the median of three strategy to select the pivot value in quick sort?

I have been reading about it on the web, but I can't figure out what exactly it is, or how it is better than randomized quicksort.

Abdul Samad, asked Sep 26 '11


2 Answers

The median of three has you look at the first, middle and last elements of the array, and choose the median of those three elements as the pivot.

To get the full effect of median-of-three, it's also important to sort those three items, not just use the median as the pivot. This doesn't affect which value is chosen as the pivot in the current iteration, but it can affect which value is used as the pivot in the next recursive call, which helps limit the bad behavior for a few initial orderings. One ordering that turns out to be particularly bad in many cases is an array that's sorted except for having the smallest element at the high end (or the largest element at the low end).

Compared to picking the pivot randomly:

  1. It ensures that one common case (fully sorted data) remains optimal.
  2. It's more difficult to manipulate into giving the worst case.
  3. A PRNG is often relatively slow.

That second point probably bears a bit more explanation. If you used the obvious random number generator (rand()), it's fairly easy (in many cases, anyway) for somebody to arrange the elements so it will continually choose poor pivots. This can be a serious concern for something like a web server that may be sorting data entered by a potential attacker, who could mount a DoS attack by getting your server to waste a lot of time sorting the data. In a case like this, you could use a truly random seed, or include your own PRNG instead of rand() -- or you could use median of three, which also has the other advantages mentioned.

On the other hand, if you use a sufficiently random generator (e.g., a hardware generator or encryption in counter mode), it's probably more difficult to force a bad case than it is with median-of-three selection. At the same time, achieving that level of randomness typically has quite a bit of overhead of its own, so unless you really expect to be attacked, it's probably not worthwhile (and if you do, it's probably worth at least considering an alternative that guarantees O(N log N) worst case, such as a merge sort or heap sort).

Jerry Coffin, answered Oct 02 '22


Think faster... C example....

int medianThree(int a, int b, int c) {
    if ((a > b) ^ (a > c))
        return a;
    else if ((b < a) ^ (b < c))
        return b;
    else
        return c;
}

This uses the bitwise XOR operator (^). You would read it as:

  • Is a greater than exclusively one of the others? return a
  • Is b smaller than exclusively one of the others? return b
  • If none of above: return c

Note that by switching the direction of the comparison for b (b < a rather than a > b), the method also covers all cases where some inputs are equal. It also means we repeat the same comparison: a > b is the same as b < a, and smart compilers can reuse and optimize that.

The median approach is faster because it leads to more even partitioning of the array, since the partitioning is based on the pivot value.

In the worst case with a fixed pick (or, if you're unlucky, a random pick) you would partition every array into one part containing just the pivot and another with all the rest, leading to O(n²) complexity.

Using the median approach you make sure that won't happen, but instead you are introducing an overhead for calculating the median.

EDIT:

Benchmark results show the XOR version is 32 times faster than Bigger, even though I optimized Bigger a little:

[Plot demonstrating benchmarks]

Recall that XOR is a very basic operation in the CPU's arithmetic logic unit (ALU), so although in C it may look a bit hacky, under the hood it compiles down to a very efficient XOR instruction.

caiohamamura, answered Oct 02 '22