 

Is it possible to implement quicksort with O(1) space complexity?

From what I understood in Wikipedia's explanation of quicksort's space complexity, quicksort's space complexity comes from its recursive nature. I'm curious as to whether it's possible to implement quicksort non-recursively and, in doing so, implement it with constant space complexity.

Daniel asked Jul 12 '12


People also ask

Can quicksort be done with constant space?

Quicksort with in-place and unstable partitioning uses only constant additional space before making any recursive call. Quicksort must store a constant amount of information for each nested recursive call. Since the best case makes at most O(log n) nested recursive calls, it uses O(log n) space.

What is a big O time complexity of quick sort for the worst case?

The worst case time complexity of a typical implementation of QuickSort is O(n²). The worst case occurs when the picked pivot is always an extreme (smallest or largest) element. This happens when the input array is sorted or reverse sorted and either the first or last element is picked as the pivot.

Why is space complexity of quicksort O log n?

Since quicksort's recursion nests on the order of log(n) levels deep (in the average case; the worst-case depth is O(n)), and each nested call needs a new stack frame of constant size, the space complexity is O(log(n)).


1 Answer

Wikipedia is not always wrong. And, as that section suggests, there is a way to do quicksort, or something similar, using constant space. One important point first: quicksort itself could be defined as a recursive partitioning algorithm. If so, then by definition it requires stack space for the recursion, O(log n) on average and O(n) in the worst case. However, I'm assuming that you are not using such a pedantic definition.

Just a quick review of how the partitioning works. Given an array, a starting point, and an ending point, a partition value is chosen. The elements of the array are then rearranged so that everything less than the partition value is on the left and everything greater is on the right. A good way of doing this is to start at each end, find the first value on each side that doesn't belong, and swap the pair. This, by the way, uses constant space.
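Here is a rough Python sketch of that two-pointer step (a Hoare-style partition; this is just an illustration, not code from the original), together with the plain recursive driver whose stack is the thing we are trying to get rid of:

    def partition(a, lo, hi):
        # Hoare-style partition: scan inward from both ends, swapping
        # out-of-place pairs.  Uses only a constant number of variables.
        pivot = a[(lo + hi) // 2]
        i, j = lo - 1, hi + 1
        while True:
            i += 1
            while a[i] < pivot:   # first value on the left that doesn't belong
                i += 1
            j -= 1
            while a[j] > pivot:   # first value on the right that doesn't belong
                j -= 1
            if i >= j:
                return j          # a[lo..j] <= pivot <= a[j+1..hi]
            a[i], a[j] = a[j], a[i]

    def quicksort(a, lo=0, hi=None):
        # Plain recursive quicksort, for comparison: the nested calls
        # are exactly the stack the trick below eliminates.
        if hi is None:
            hi = len(a) - 1
        if lo < hi:
            p = partition(a, lo, hi)
            quicksort(a, lo, p)
            quicksort(a, p + 1, hi)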

So, each step of the algorithm is going through the array. Let's remember this fact.

Now, we can make an interesting observation. If we do the recursive partitioning in a depth-first fashion, then we only have to store the end points of each range. On the way down, the left edge of the array is always the beginning. The end point gets successively closer to the beginning, until there are just two elements, which can be swapped or not. At that point, the beginning moves over two slots, but we don't know the end. So, look up the end and continue the process. Then at the next step "up", we need the next end point, and so on.

The question is: Can we find the end by some means other than storing the actual value in a stack?

Well, the answer is "yes".

Each step in the recursive partitioning algorithm reads through all the data, so we can do some additional calculations along the way. In particular, we can calculate the largest value and the second largest value. (I would also calculate the smallest value, but that is an optimization.)
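Since the partition pass already touches every element, this bookkeeping is free in big-O terms. A one-pass sketch (again just an illustration, and it assumes distinct values; duplicates are discussed below):

    def two_largest(a, lo, hi):
        # One pass over a[lo..hi]: returns (largest, second largest).
        # Piggybacks on a scan we are making anyway, O(1) extra space.
        first = second = None
        for i in range(lo, hi + 1):
            x = a[i]
            if first is None or x > first:
                first, second = x, first
            elif second is None or x > second:
                second = x
        return first, second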

What we do with these values is mark the ranges. On the first split, this means putting the second largest value at the split point and the largest value at the end of the range. On the way back up the tree, you know where a range starts; the end of the range is at the first value larger than the value at the start.
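A sketch of what that placement might look like (this is my reading of the scheme, with a hypothetical helper name; it assumes distinct values, and a full implementation would need care about exactly which slot counts as the split point):

    def place_sentinels(a, lo, hi, split):
        # Mark the range a[lo..hi]: move the largest value to the end
        # slot and the second largest to the split point, so both
        # boundaries can later be rediscovered by scanning.
        imax = max(range(lo, hi + 1), key=lambda i: a[i])
        a[imax], a[hi] = a[hi], a[imax]          # largest marks the end
        imax2 = max(range(lo, hi), key=lambda i: a[i])
        a[imax2], a[split] = a[split], a[imax2]  # second largest marks the split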

Voila! You can move up the "recursion" tree without storing any data. You are just using the data as presented.

Once you have accomplished this, you simply need to change the algorithm from recursive form to a while loop. The while loop rearranges the data, setting a starting point and a stopping point at each step. It chooses a splitter, splits the data, marks the starting and ending points, and then repeats on the left side of the data.
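As a stepping stone, the conventional recursion-to-loop conversion looks like this; it still keeps an explicit stack of (lo, hi) ranges, and that stack is exactly what the sentinel trick above replaces with scans over the data:

    def quicksort_iterative(a):
        # Recursion replaced by a loop, but with an explicit stack of
        # pending ranges -- the O(log n) storage the sentinel scheme
        # eliminates by rediscovering each end point in the data itself.
        stack = [(0, len(a) - 1)]
        while stack:
            lo, hi = stack.pop()
            if lo < hi:
                p = partition(a, lo, hi)   # Hoare partition from the sketch above
                stack.append((p + 1, hi))  # right side deferred
                stack.append((lo, p))      # left side handled next (depth-first)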

When it has gotten down to the smallest unit, it then checks whether it is done (has it reached the end of the data?). If not, it looks at the data point one unit over to find the first marker. It then scans through the data to find the end point. This search, by the way, is equivalent in complexity to partitioning the data, so it does not add to the order of complexity. It then iterates through this range, continuing the process until it is done.
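That end-point search might look like this (a hypothetical helper, assuming the sentinel layout above and distinct values):

    def find_range_end(a, start):
        # Scan right from `start` for the first value larger than
        # a[start]; under the layout above, that is the "largest"
        # marker sitting at the end of the current range.  One pass,
        # so no worse than a partition step.
        marker = a[start]
        i = start + 1
        while i < len(a) and a[i] < marker:
            i += 1
        return i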

If you have duplicates in the data, the process is slightly more complex. However, if there are log(N) or more duplicates, I would almost argue for removing the duplicates, sorting the remaining data using the freed slots as a stack, and then incorporating the duplicates back in at the end.

Why is this quicksort? The quicksort algorithm is a partition-exchange algorithm: it proceeds by choosing a splitter value, partitioning the data into two sides, and repeating the process. Recursion is not necessary, as Jeffrey points out in his answer; it is merely a great convenience.

This algorithm proceeds in exactly the same way. The partitioning follows the same underlying rule, with smaller records on the left and larger records on the right. The only difference is that within each partition, particular values are chosen to sit at the edges of the partition. By careful placement of these values, no additional "per-step" storage is needed. Since these values belong in the partition anyway, this is a valid partition according to the quicksort principle of partition-and-repeat.

If one argues that a quicksort must use recursion, then this would fail that strict test (and the answer to the original question is trivial).

Gordon Linoff answered Sep 24 '22