quicksort stack size

Why, in a non-recursive implementation of quicksort, do we prefer to sort the smaller partition and push the larger one onto the stack after partitioning? Doing this reduces the space complexity of quicksort to O(log n) for random files. Could someone elaborate on it?

asked Jul 15 '11 by mohit

People also ask

Does Quicksort use stack?

Quicksort makes two recursive calls at each level. The trick is to transform one of them into a tail call and then into iteration, so only one subrange ever needs to be pushed onto the stack.
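As a sketch of that transformation (function and variable names are my own, using the Lomuto partition scheme for illustration): recurse only into the smaller partition, and let the tail call on the larger partition become a loop iteration, so the recursion depth stays O(log n).

```python
def quicksort(a, lo=0, hi=None):
    """Sort a[lo..hi] in place, recursing only into the smaller partition."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort(a, lo, p - 1)   # recurse into the smaller (left) side
            lo = p + 1                # tail call on [p+1, hi] becomes a loop
        else:
            quicksort(a, p + 1, hi)   # recurse into the smaller (right) side
            hi = p - 1                # tail call on [lo, p-1] becomes a loop

def partition(a, lo, hi):
    # Lomuto partition: a[hi] is the pivot; returns its final index.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i
```

Because each recursive call handles a range at most half the size of its caller's, the call stack never holds more than about log2(n) frames, even on adversarial input.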

Does Quicksort use extra space?

Although quicksort doesn't use auxiliary space to store array elements, additional space is required for the stack frames of its recursive calls. In the worst case this stack grows to O(n): when the pivot is the largest or smallest element of the array in every recursive call, each partition produces subarrays of sizes n-1 and 1.

What is the recursive depth of Quicksort?

Define the recursion depth of QuickSort to be the maximum number of successive recursive calls before it hits the base case — equivalently, the number of the last level of the corresponding recursion tree. Note that the recursion depth is a random variable, which depends on which pivots get chosen.
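To make the definition concrete, here is a small sketch (names are my own, Lomuto partition assumed) that sorts and reports the recursion depth as defined above, i.e. the longest chain of recursive calls before hitting the base case:

```python
def quicksort_depth(a, lo=0, hi=None, depth=1):
    """Sort a[lo..hi] in place; return the maximum recursion depth reached."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return depth              # base case: range of size 0 or 1
    p = partition(a, lo, hi)
    left = quicksort_depth(a, lo, p - 1, depth + 1)
    right = quicksort_depth(a, p + 1, hi, depth + 1)
    return max(left, right)

def partition(a, lo, hi):
    # Lomuto partition: a[hi] is the pivot; returns its final index.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i
```

On random input the depth is typically O(log n), but on already-sorted input with this pivot choice every call peels off a single element, so the depth reaches n, illustrating why the depth is a random variable that depends on the pivots chosen.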

What is quicksort's space complexity?

Solution: With the smaller-partition-first strategy, quicksort's stack space is O(log n), even in the worst case.


1 Answer

As you know, at each recursive step, you partition an array. Push the larger part on the stack, continue working on the smaller part.

Because the one you carry on working with is the smaller one, it is at most half the size of the one you were working with before. So for each range we push on the stack, we halve the size of the range we're working with.

That means we can't push more than log n ranges onto the stack before the range we're working with hits size 1 (and therefore is sorted). This bounds the amount of stack we need to complete the first descent.

When we start processing the "big parts", each "big part" B(k) is bigger than the "small part" S(k) produced at the same time, so we might need more stack to handle B(k) than we needed to handle S(k). But B(k) is still smaller than the previous "small part" S(k-1), and once we're processing B(k), we've taken it back off the stack, which is therefore one item smaller than when we processed S(k), and the same size as when we processed S(k-1). So we still have our bound.

Suppose we did it the other way around: push the small part and continue working with the large part. Then in the pathologically nasty case, we'd push a size-1 range onto the stack each time and continue working with a range only 2 smaller than the previous one. Hence we'd need n / 2 slots in our stack.
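The whole argument can be checked with a fully iterative sketch (names are my own, Lomuto partition assumed) that uses an explicit stack, always defers the larger partition, and records the deepest the stack ever gets:

```python
def quicksort_iterative(a):
    """Sort a in place with an explicit stack, pushing the LARGER
    partition and continuing with the smaller one; returns the
    maximum stack size observed."""
    stack = [(0, len(a) - 1)]
    max_stack = 1
    while stack:
        lo, hi = stack.pop()
        while lo < hi:
            p = partition(a, lo, hi)
            if p - lo < hi - p:            # left side is the smaller one
                stack.append((p + 1, hi))  # defer the larger right side
                hi = p - 1                 # keep working on the smaller
            else:
                stack.append((lo, p - 1))  # defer the larger left side
                lo = p + 1
            max_stack = max(max_stack, len(stack))
    return max_stack

def partition(a, lo, hi):
    # Lomuto partition: a[hi] is the pivot; returns its final index.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i
```

Running this on random input, the reported maximum stack size stays at or below floor(log2 n) + 1, matching the bound above; on already-sorted input (the pathological case for this pivot choice) the smaller partition is always empty, so the deferred range is popped immediately and the stack never holds more than one entry.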

answered Sep 22 '22 by Steve Jessop