I have an array of N numbers, all of which are the same, and I am applying quicksort to it. What should the time complexity of the sorting be in this case?
I googled around for this question but did not find an exact explanation.
Any help would be appreciated.
Worst-case analysis: the worst case may occur if the pivot happens to be the smallest or largest element in the list, or, in some implementations (e.g., the Lomuto partition scheme), when all the elements are equal. In that case quicksort takes O(n^2) time.
The worst-case time complexity of a typical implementation of quicksort is O(n^2). The worst case occurs when the picked pivot is always an extreme (smallest or largest) element. This happens when the input array is already sorted or reverse sorted and either the first or last element is picked as the pivot.
Quicksort is, on average, an O(n log n) algorithm (see Big O notation), and depending on the way you implement it, it can be up to 2x or even 3x faster than merge sort or heap sort.
Quicksort uses ~2 N ln N compares (and one-sixth that many exchanges) on the average to sort an array of length N with distinct keys.
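As a rough empirical check of that figure (my own illustration, not part of the original answer), the sketch below runs a plain Lomuto-partition quicksort on a seeded shuffle of 1,000 distinct keys and compares the measured comparison count against 2 N ln N:

```python
import math
import random

def quicksort_count(arr, low=0, high=None, counter=None):
    """Lomuto-partition quicksort that counts element-vs-pivot comparisons."""
    if high is None:
        high, counter = len(arr) - 1, [0]
    if low < high:
        pivot = arr[high]               # last element as pivot
        i = low - 1
        for j in range(low, high):
            counter[0] += 1             # one comparison per scanned element
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        quicksort_count(arr, low, i, counter)
        quicksort_count(arr, i + 2, high, counter)
    return counter[0]

random.seed(42)                         # fixed seed so the run is deterministic
data = list(range(1000))
random.shuffle(data)
compares = quicksort_count(data)
expected = 2 * 1000 * math.log(1000)    # ~13815 for N = 1000
print(compares, round(expected))        # measured count should be in the same ballpark
```

The exact count depends on the shuffle, but for a randomly ordered array it lands close to the ~2 N ln N estimate rather than anywhere near the N^2/2 worst case.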
This depends on the implementation of quicksort. The traditional implementation, which partitions into two (< and >=) sections, will have O(n*n) behavior on identical input. While no swaps will necessarily occur, it will cause n recursive calls to be made, each of which needs to make a comparison with the pivot for n - recursionDepth elements, i.e. O(n*n) comparisons need to be made.
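To see this concretely, here is a sketch of that two-way Lomuto scheme, instrumented with counters (the function and counter names are my own, chosen for illustration). On an all-equal array of length n, every element compares as <= the pivot, so each partition peels off just one element: n(n-1)/2 comparisons in total.

```python
def lomuto_quicksort(arr, low=0, high=None, stats=None):
    """Two-way (<=, >) Lomuto quicksort with comparison and call counters."""
    if high is None:
        high = len(arr) - 1
        stats = {"compares": 0, "calls": 0}
    if low < high:
        stats["calls"] += 1
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            stats["compares"] += 1
            if arr[j] <= pivot:        # equal elements all fall on one side
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        lomuto_quicksort(arr, low, i, stats)       # everything except the pivot
        lomuto_quicksort(arr, i + 2, high, stats)  # empty on all-equal input
    return stats

stats = lomuto_quicksort([7] * 100)
print(stats)   # 100*99/2 = 4950 compares across 99 non-trivial calls
```

Each recursive call shrinks the range by only one element, which is exactly the degenerate n-deep recursion described above.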
However, there is a simple variant which partitions into three sets (<, =, and >). This variant has O(n) performance in this case: instead of choosing the pivot, swapping, and then recursing on 0 to pivotIndex-1 and pivotIndex+1 to n, it swaps everything equal to the pivot into the 'middle' partition (which, in the case of all-identical input, always means swapping with itself, i.e. a no-op). This means the call stack will only be one level deep; in this particular case, n comparisons and no swaps occur. I believe this variant has made its way into the standard library on Linux, at least.
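A sketch of that three-way ("Dutch national flag" style) partition, with names invented here for illustration; on an all-equal input the entire array lands in the middle partition after a single linear pass, and neither recursive call has any work to do:

```python
def quicksort3(arr, low=0, high=None, stats=None):
    """Three-way (<, =, >) quicksort; equal keys never recurse."""
    if high is None:
        high = len(arr) - 1
        stats = {"compares": 0}
    if low < high:
        pivot = arr[low]
        lt, i, gt = low, low + 1, high   # invariant: arr[low..lt-1] < pivot,
        while i <= gt:                   # arr[lt..i-1] == pivot, arr[gt+1..high] > pivot
            stats["compares"] += 1
            if arr[i] < pivot:
                arr[lt], arr[i] = arr[i], arr[lt]
                lt += 1
                i += 1
            elif arr[i] > pivot:
                arr[i], arr[gt] = arr[gt], arr[i]
                gt -= 1
            else:                        # equal: leave it in the middle, no swap
                i += 1
        quicksort3(arr, low, lt - 1, stats)   # strictly-less part
        quicksort3(arr, gt + 1, high, stats)  # strictly-greater part
    return stats

stats = quicksort3([7] * 100)
print(stats)   # {'compares': 99} -- linear, one partition pass, call stack depth 1
```

On mixed input the same code still sorts correctly; the equal-to-pivot band simply shrinks to a single element when all keys are distinct.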
The performance of quicksort depends on the pivot selection. The closer the chosen pivot is to the median element, the better is quicksort's performance.
In this specific case you're lucky: the pivot you select will always be a median, since all the values are the same. The partition step of quicksort will hence never have to swap elements, and the two pointers will meet exactly in the middle. The two subproblems will therefore be exactly half the size, giving you a perfect O(n log n).
To be a little more specific, this depends on how well the partition step is implemented. The loop invariant only needs to ensure that smaller elements end up in the left-hand subproblem and greater elements in the right-hand subproblem. There's no guarantee that a partition implementation never swaps equal elements, but it is always unnecessary work, so a clever implementation shouldn't do it: the left and right pointers will never detect an inversion with respect to the pivot (i.e. you will never hit the case where *left > pivot && *right < pivot), so the left pointer will be incremented and the right pointer decremented at every step, and they will finally meet in the middle, generating subproblems of size n/2.
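That pointer behavior matches Hoare's original partition scheme. In the sketch below (instrumented with a recursion-depth counter of my own for illustration), an all-equal array is split exactly in half at every level, so the recursion on 1024 equal keys is only log2(1024) = 10 levels deep:

```python
def hoare_quicksort(arr, low=0, high=None, depth=1, stats=None):
    """Hoare-partition quicksort that tracks the deepest recursive call."""
    if high is None:
        high = len(arr) - 1
        stats = {"max_depth": 0}
    if low < high:
        stats["max_depth"] = max(stats["max_depth"], depth)
        pivot = arr[low]
        i, j = low - 1, high + 1
        while True:
            i += 1
            while arr[i] < pivot:   # an equal key stops the scan...
                i += 1
            j -= 1
            while arr[j] > pivot:   # ...so both pointers advance one step per round
                j -= 1
            if i >= j:
                break
            arr[i], arr[j] = arr[j], arr[i]
        hoare_quicksort(arr, low, j, depth + 1, stats)
        hoare_quicksort(arr, j + 1, high, depth + 1, stats)
    return stats

stats = hoare_quicksort([7] * 1024)
print(stats)   # {'max_depth': 10} -- perfectly balanced halves on equal keys
```

The swaps it performs on equal elements are the "unnecessary work" mentioned above; they cost a constant per step but are what keeps the pointers marching toward the middle.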