Wikipedia mentions that this method sorts an array in O(n log n) time while also being stable and in-place. That sounds like a very good sorting algorithm, since no other well-known algorithm has all of those properties at once (Insertion Sort isn't O(n log n), Heap Sort isn't stable, Quicksort and Introsort aren't stable, Mergesort isn't in-place). However, Wikipedia only gives its name and nothing else. The reference it cites is Franceschini, Gianni (1 June 2007). "Sorting Stably, in Place, with O(n log n) Comparisons and O(n) Moves". Theory of Computing Systems 40 (4): 327–353. That paper, however, doesn't really explain how the algorithm works; it is more about why it exists.
My question is: how does this method actually work (what steps does it perform), and why are there so few resources about it, considering there are no other known O(n log n) stable in-place sorting methods?
It's sufficiently easy to implement merge sort in-place with O(log n) additional space, which I guess is close enough in practice.
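To show what that looks like in practice, here is a minimal sketch (my own illustrative code with made-up function names, not taken from any of the papers discussed) of the usual rotation-based approach: a stable merge done in place with `std::rotate` plus binary searches, so the whole sort needs only the O(log n) recursion stack. The trade-off is that this easy variant costs O(n log² n) comparisons in the worst case rather than the O(n log n) of Franceschini's algorithm.

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

// Stable in-place merge of the sorted runs [first, mid) and [mid, last),
// done via binary searches and rotations (no auxiliary array).
template <typename It>
void inplace_merge_by_rotation(It first, It mid, It last) {
    auto len1 = std::distance(first, mid);
    auto len2 = std::distance(mid, last);
    if (len1 == 0 || len2 == 0) return;
    if (len1 + len2 == 2) {                       // two single elements
        if (*mid < *first) std::iter_swap(first, mid);
        return;
    }
    It cut1, cut2;
    if (len1 > len2) {                            // split the larger run in half and
        cut1 = first + len1 / 2;                  // binary-search its pivot in the other
        cut2 = std::lower_bound(mid, last, *cut1);
    } else {
        cut2 = mid + len2 / 2;
        cut1 = std::upper_bound(first, mid, *cut2);
    }
    It new_mid = std::rotate(cut1, mid, cut2);    // bring the two middle blocks together
    inplace_merge_by_rotation(first, cut1, new_mid);
    inplace_merge_by_rotation(new_mid, cut2, last);
}

// Top-level merge sort: O(log n) extra space (recursion stack only), stable.
template <typename It>
void inplace_merge_sort(It first, It last) {
    auto len = std::distance(first, last);
    if (len < 2) return;
    It mid = first + len / 2;
    inplace_merge_sort(first, mid);
    inplace_merge_sort(mid, last);
    inplace_merge_by_rotation(first, mid, last);
}

int main() {
    std::vector<int> v{5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
    inplace_merge_sort(v.begin(), v.end());
    for (int x : v) std::cout << x << ' ';        // 0 1 2 3 4 5 6 7 8 9
    std::cout << '\n';
}
```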
In fact, there is a merge sort variant that is stable and uses only O(1) additional memory: "Practical in-place mergesort" by Katajainen, Pasanen and Teuhola. It achieves the optimal O(n log n) running time, but it is not optimal in the number of moves: it performs Ω(n log n) element moves, whereas the Franceschini paper shows that O(n) moves suffice.
It seems to run slower than a traditional merge sort, but not by a large margin. By contrast, the Franceschini version appears to be far more complicated and to carry a huge constant-factor overhead.
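To give a flavour of the internal-buffer idea that O(1)-extra-memory mergesorts of this kind build on, here is an illustrative sketch under my own naming, not the actual procedure from the Katajainen et al. paper: a region of the array is treated as scratch space, the left run is parked there, and the merge proceeds purely by swaps. Each merge pass still physically moves Θ(n) elements and there are Θ(log n) passes, which hints at why such variants end up doing Ω(n log n) moves overall, the quantity Franceschini's algorithm reduces to O(n).

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// Illustrative sketch (hypothetical name, not the paper's procedure):
// merge the adjacent sorted runs [first, mid) and [mid, last) using a
// scratch region of the same array starting at buf, of size at least
// (mid - first), whose contents may be permuted freely.
// All data movement is done with swaps, so no extra memory is allocated.
template <typename It>
void merge_with_internal_buffer(It first, It mid, It last, It buf) {
    std::swap_ranges(first, mid, buf);            // park the left run in the buffer
    It left = buf, left_end = buf + (mid - first);
    It right = mid, out = first;
    while (left != left_end && right != last) {
        // take from the right run only when strictly smaller, keeping the merge stable
        if (*right < *left) std::iter_swap(out++, right++);
        else                std::iter_swap(out++, left++);
    }
    while (left != left_end) std::iter_swap(out++, left++);
    // any remaining right-run elements are already in their final positions
}

int main() {
    // Two sorted runs of length 4 followed by a 4-element scratch region
    // that plays the role of the internal buffer.
    std::vector<int> v{1, 4, 6, 9,  2, 3, 7, 8,  0, 0, 0, 0};
    merge_with_internal_buffer(v.begin(), v.begin() + 4, v.begin() + 8, v.begin() + 8);
    for (int x : v) std::cout << x << ' ';        // 1 2 3 4 6 7 8 9, then the buffer contents
    std::cout << '\n';
}
```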