I have a trivial quicksort implementation that goes like this:
template <typename IteratorType>
void quicksort(IteratorType begin, IteratorType end)
{
    if (begin != end)
    {
        // Use the middle element as the pivot.
        const auto pivot = *(begin + std::distance(begin, end) / 2);
        // Move everything smaller than the pivot to the front of the range.
        const IteratorType sep = std::partition(begin, end,
            [pivot](typename IteratorType::value_type v) { return (v < pivot); });
        if (sep != begin)
        {
            quicksort(begin, sep);
        }
        if (sep != end)
        {
            quicksort(sep + 1, end);
        }
    }
}
Testing it on an array of 1,000,000 elements takes practically forever (6300 ms), and it sometimes dies from the recursion going too deep, while std::sort takes about 30 ms.
Of course I don't expect my crappy implementation to be as fast as std::sort, but how can there be such a huge difference?
I understand std::sort uses something more sophisticated than a plain quicksort (I believe it is introsort), which among other things keeps the recursion depth under control. But still, is there something obvious I'm missing that could explain such a huge difference?
Varying the size of the array shows that the difference factor is not constant; it actually seems to grow like n².
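For reference, here is my rough sketch of that depth-limiting idea (this is just the concept as I understand it, not what libstdc++ actually does; the names are mine, and I use a two-pass partition plus a heapsort fallback to keep it from misbehaving on equal elements):

#include <algorithm>
#include <cmath>
#include <iterator>

// Sketch of the introsort idea: after roughly 2*log2(n) levels of recursion,
// give up on quicksort and fall back to heapsort, which is O(n log n) worst case.
template <typename IteratorType>
void sort_limited(IteratorType begin, IteratorType end, int depth)
{
    using value_type = typename std::iterator_traits<IteratorType>::value_type;

    if (std::distance(begin, end) <= 1)
        return;
    if (depth == 0)
    {
        // Too many unbalanced partitions: switch to a guaranteed O(n log n) sort.
        std::make_heap(begin, end);
        std::sort_heap(begin, end);
        return;
    }

    const value_type pivot = *(begin + std::distance(begin, end) / 2);
    // Two-pass partition: elements equal to the pivot end up in the middle
    // and are excluded from both recursive calls, so progress is guaranteed.
    const auto middle1 = std::partition(begin, end,
        [&pivot](const value_type& v) { return v < pivot; });
    const auto middle2 = std::partition(middle1, end,
        [&pivot](const value_type& v) { return !(pivot < v); });
    sort_limited(begin, middle1, depth - 1);
    sort_limited(middle2, end, depth - 1);
}

template <typename IteratorType>
void sort_limited(IteratorType begin, IteratorType end)
{
    const auto n = std::distance(begin, end);
    sort_limited(begin, end, n > 0 ? 2 * static_cast<int>(std::log2(n)) : 0);
}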
Look into better pivot selection first (median of 3 is commonly used), and eliminate one branch of the recursion to save stack space.
Pivot selection has the biggest impact on the overall algorithmic performance, since it makes the difference between O(n log n) and O(n²) behavior. A sketch of both ideas follows.
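Something along these lines (untested sketch, not a drop-in replacement for your code): the pivot is the median of the first, middle and last elements, and the function recurses only into the smaller half while looping on the larger one, so the stack depth stays O(log n).

#include <algorithm>
#include <iterator>

template <typename IteratorType>
void quicksort(IteratorType begin, IteratorType end)
{
    using value_type = typename std::iterator_traits<IteratorType>::value_type;

    while (std::distance(begin, end) > 1)
    {
        // Median of the first, middle and last elements as the pivot.
        const auto mid  = begin + std::distance(begin, end) / 2;
        const auto last = end - 1;
        const value_type pivot = std::max(std::min(*begin, *mid),
                                          std::min(std::max(*begin, *mid), *last));

        // Two-pass partition: equal elements land in [middle1, middle2)
        // and are already in their final position.
        const auto middle1 = std::partition(begin, end,
            [&pivot](const value_type& v) { return v < pivot; });
        const auto middle2 = std::partition(middle1, end,
            [&pivot](const value_type& v) { return !(pivot < v); });

        // Recurse into the smaller part, loop on the larger one.
        if (std::distance(begin, middle1) < std::distance(middle2, end))
        {
            quicksort(begin, middle1);
            begin = middle2;
        }
        else
        {
            quicksort(middle2, end);
            end = middle1;
        }
    }
}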
Assuming the code is correct (quicksort can be tricky), then I guess the big difference is that you are not falling back to a faster sort when the number of elements is small. For instance, it's common to use selection sort once the number of elements to be sorted drops below some smallish threshold.
That rinky-dink C++11 code makes me suspicious as well, although I'll freely admit to knowing nothing about it.
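In case it helps, here is a sketch of the cutoff idea; the threshold of 16 is an arbitrary guess, and the two-pass partition is only there so the recursion can't stall on ranges full of equal elements.

#include <algorithm>
#include <iterator>

template <typename IteratorType>
void quicksort_with_cutoff(IteratorType begin, IteratorType end)
{
    using value_type = typename std::iterator_traits<IteratorType>::value_type;

    if (std::distance(begin, end) <= 16)
    {
        // Selection sort: quadratic, but with tiny constants on small ranges.
        for (auto it = begin; it != end; ++it)
            std::iter_swap(it, std::min_element(it, end));
        return;
    }

    const value_type pivot = *(begin + std::distance(begin, end) / 2);
    // Elements equal to the pivot end up in [middle1, middle2) and are
    // excluded from both recursive calls.
    const auto middle1 = std::partition(begin, end,
        [&pivot](const value_type& v) { return v < pivot; });
    const auto middle2 = std::partition(middle1, end,
        [&pivot](const value_type& v) { return !(pivot < v); });
    quicksort_with_cutoff(begin, middle1);
    quicksort_with_cutoff(middle2, end);
}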