I have learnt that the space complexity of quicksort, without Sedgewick's trick of eliminating tail recursion, is O(n). But if we trace the calls that are stored on the stack, there are only O(log n) of them at any point, as shown in the figure.
In the figure,
while computing the value of (1,1) we store the calls [(1,8), (1,4), (1,2)],
while computing the value of (3,3) we store the calls [(1,8), (1,4), (3,4)], and so on,
which amounts to only O(log n) space at any point in time. Then how can the space complexity be O(n)?
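For concreteness, here is a minimal sketch of the kind of run I am tracing. It assumes a Hoare-style partition that picks the middle element as the pivot; the stats / max_depth bookkeeping is only there to measure how deep the stack gets:

    import random

    def quicksort(a, lo, hi, depth, stats):
        # Record the deepest recursion level reached so far.
        stats['max_depth'] = max(stats['max_depth'], depth)
        if lo >= hi:
            return
        pivot = a[(lo + hi) // 2]        # middle element as the pivot
        i, j = lo, hi
        while i <= j:                    # Hoare-style partition
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        quicksort(a, lo, j, depth + 1, stats)   # left part
        quicksort(a, i, hi, depth + 1, stats)   # right part

    stats = {'max_depth': 0}
    a = random.sample(range(1024), 1024)        # a random permutation
    quicksort(a, 0, len(a) - 1, 1, stats)
    print(stats['max_depth'])                   # usually close to log2(1024) = 10

On random input the pivots tend to split each range roughly in half, so the printed depth stays near log2 n, which matches my trace above.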
In the tree example you gave above, you show a run of quicksort that always happens to pick the exact median element as the pivot at each step. That makes the recursion depth O(log n), and so, as you noted, the space usage would be O(log n) even without the optimization.
But what happens if you get a bad run of quicksort? That is, what happens if you always pick the absolute biggest or absolute smallest element of the array as the pivot at each point? Then your recursion tree will look something like this:
size n
     \
      size n-1
            \
             size n-2
                    \
                     ...
                        \
                         1
Now your recursion tree has height Θ(n), so quicksort implemented without any tail-call elimination will use Θ(n) space: one stack frame for each recursive call that is active at that point.
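The trick your question mentions avoids exactly this. Here is a minimal sketch of the idea, assuming the same Hoare-style partition as the sketch in the question (not any particular textbook implementation): replace the second recursive call by a loop, and always recurse into the smaller part. Every stack frame then covers at most half of its parent's range, so the stack can never hold more than O(log n) frames, even on this worst-case input.

    def quicksort_bounded(a, lo, hi):
        # Eliminate the tail call by looping on the larger part
        # and recursing only into the smaller one.
        while lo < hi:
            pivot = a[(lo + hi) // 2]
            i, j = lo, hi
            while i <= j:                # Hoare-style partition
                while a[i] < pivot:
                    i += 1
                while a[j] > pivot:
                    j -= 1
                if i <= j:
                    a[i], a[j] = a[j], a[i]
                    i, j = i + 1, j - 1
            if j - lo < hi - i:          # left part is the smaller one
                quicksort_bounded(a, lo, j)
                lo = i                   # continue the loop on the right part
            else:
                quicksort_bounded(a, i, hi)
                hi = j                   # continue the loop on the left part

Note that this only bounds the stack, not the running time: a degenerate run like the one above still takes Θ(n^2) comparisons, but it now needs only O(log n) extra space.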