As they say, "true quicksort sorts in-place". So the standard short Haskell code for quicksort,
quicksort :: Ord a => [a] -> [a]
quicksort [] = []
quicksort (p:xs) = quicksort lesser ++ [p] ++ quicksort greater
  where
    lesser  = filter (< p) xs
    greater = filter (>= p) xs
what algorithm/computational process is it describing, after all?
It surely isn't what Tony Hoare devised, since it lacks that algorithm's most defining feature: the in-place partitioning.
(the answer might be well known, but not yet here on SO).
Correction: this question is in fact a duplicate; the answer is known on SO after all. Cf. Pseudo-quicksort time complexity.
Yup, this is quicksort, just not in-place. It matches the high-level algorithm for quicksort whilst changing the low-level implementation to match the data structure of linked lists. That's why it's quicksort for linked lists.
I'd prefer to say "quicksort was originally developed to work in-place" rather than "true quicksort is done in-place". There are many variants of quicksort, including picking pivots randomly to avoid worst-case behaviour, etc. This is a sensible, clear definition of quicksort for linked lists.
This definition exactly matches how we teach quicksort to 16-year-old maths students in the UK. (We're teaching algorithms, not programming.) The in-place detail very much obscures the purpose and design, which is why we don't teach it, even though we're a million miles from teaching functional programming or linked lists. (That doesn't change the fact that the pair-swapping, in-place algorithm is best when you have destructive-update arrays.)
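For contrast, here is one way that destructive, in-place style can be sketched in Haskell itself, using a mutable unboxed array in the ST monad. This is only an illustrative sketch, not Hoare's original formulation: it uses the simpler Lomuto partition scheme rather than Hoare's two-pointer scheme, and the names quicksortInPlace, sortRange and partitionLomuto are made up for this example:

import Control.Monad (when)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newListArray, readArray, writeArray, getElems)

-- Sort a list by copying it into a mutable array, sorting the
-- array in-place, and reading the result back out.
-- e.g. quicksortInPlace [3,1,2] == [1,2,3]
quicksortInPlace :: [Int] -> [Int]
quicksortInPlace xs = runST $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs :: ST s (STUArray s Int Int)
  sortRange arr 0 (n - 1)
  getElems arr

-- Recursively sort the slice arr[lo..hi].
sortRange :: STUArray s Int Int -> Int -> Int -> ST s ()
sortRange arr lo hi = when (lo < hi) $ do
  p <- partitionLomuto arr lo hi
  sortRange arr lo (p - 1)
  sortRange arr (p + 1) hi

-- Lomuto partition: take arr[hi] as the pivot, swap everything
-- <= pivot into the prefix, then swap the pivot into its final
-- position and return that index.
partitionLomuto :: STUArray s Int Int -> Int -> Int -> ST s Int
partitionLomuto arr lo hi = do
  pivot <- readArray arr hi
  let go i j
        | j >= hi   = do swap arr i hi; return i
        | otherwise = do
            x <- readArray arr j
            if x <= pivot
              then do swap arr i j; go (i + 1) (j + 1)
              else go i (j + 1)
  go lo lo

-- Exchange two array elements.
swap :: STUArray s Int Int -> Int -> Int -> ST s ()
swap arr i j = do
  a <- readArray arr i
  b <- readArray arr j
  writeArray arr i b
  writeArray arr j a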
There is a time penalty to this definition, since it traverses the list twice to build the two sublists. It's certainly possible to rewrite this to use partition rather than filter (as sketched below), but I'd argue that's an optimisation rather than a change to the fundamental algorithm, which is still quicksort.
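For example, Data.List.partition splits the list in a single pass; the following variant (quicksort' is just an illustrative name) should behave identically to the original:

import Data.List (partition)

-- Same algorithm, but the two sublists are built in one
-- traversal of xs instead of two filter passes.
quicksort' :: Ord a => [a] -> [a]
quicksort' [] = []
quicksort' (p:xs) = quicksort' lesser ++ [p] ++ quicksort' greater
  where
    -- partition q xs == (filter q xs, filter (not . q) xs), in one pass
    (lesser, greater) = partition (< p) xs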