Pseudo-quicksort time complexity

I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a suitably high level of abstraction) that is often used to demonstrate the conciseness of functional languages is as follows (given in Haskell):

quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort [y | y<-xs, y<p] ++ [p] ++ quicksort [y | y<-xs, y>=p]

Okay, so I know this thing has problems. The biggest problem with this is that it does not sort in place, which is normally a big advantage of quicksort. Even if that didn't matter, it would still take longer than a typical quicksort because it has to do two passes of the list when it partitions it, and it does costly append operations to splice it back together afterwards. Further, the choice of the first element as the pivot is not the best choice.
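(For what it's worth, the double traversal can at least be avoided with Data.List.partition, which splits the list in a single pass. A sketch of that variant, still with the (++) appends and the first-element pivot:)

import Data.List (partition)

-- Same algorithm, but each element is inspected once per level of recursion;
-- partition splits xs into the (< p) elements and the (>= p) elements in one pass.
quicksortP :: Ord a => [a] -> [a]
quicksortP []     = []
quicksortP (p:xs) = quicksortP smaller ++ [p] ++ quicksortP larger
  where
    (smaller, larger) = partition (< p) xs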

But even considering all of that, isn't the average time complexity of this quicksort the same as the standard quicksort? Namely, O(n log n)? Because the appends and the partition still have linear time complexity, even if they are inefficient.

asked Jul 06 '12 by Ord


1 Answer

This "quicksort" is actually deforested tree sort: http://www.reddit.com/r/programming/comments/2h0j2/real_quicksort_in_haskell

data Tree a = Leaf | Node a (Tree a) (Tree a)

-- Build an (unbalanced) binary search tree: the head of the list becomes the root,
-- elements <= x go into the left subtree, elements > x into the right.
mkTree :: Ord a => [a] -> Tree a
mkTree []     = Leaf
mkTree (x:xs) = Node x (mkTree (filter (<= x) xs)) (mkTree (filter (x <) xs))

The binary tree is not kept balanced, so building the search tree has O(N^2) worst-case and O(N log N) average-case complexity.
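To see the worst case concretely: on already-sorted input every remaining element is larger than the root, so the tree degenerates into a right-leaning list, and the repeated filters over lists of length N-1, N-2, ... add up to O(N^2). (Hypothetical GHCi session, assuming a Show instance, e.g. deriving Show, is added to Tree:)

Prelude> mkTree [1,2,3,4]
Node 1 Leaf (Node 2 Leaf (Node 3 Leaf (Node 4 Leaf)))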

foldTree :: (a -> b -> b -> b) -> b -> Tree a -> b
foldTree f g Leaf         = g
foldTree f g (Node x l r) = f x (foldTree f g l) (foldTree f g r)

-- Sort by building the tree and flattening it in order with (++).
treeSort :: Ord a => [a] -> [a]
treeSort l = foldTree (\x lft rht -> lft ++ [x] ++ rht) [] (mkTree l)

The retrieval (in-order flattening with (++)) likewise has O(N^2) worst-case and O(N log N) average-case complexity.
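(As an aside, not part of the answer above: the cost of the nested (++) can be avoided by flattening with an accumulator, a sketch of which follows under a hypothetical name toListAcc. The tree shape, and hence mkTree's worst case, is unchanged, so the overall bounds stay the same.)

-- In-order traversal that threads an accumulator instead of using (++);
-- this part runs in O(N) regardless of tree shape.
toListAcc :: Tree a -> [a]
toListAcc t = go t []
  where
    go Leaf         acc = acc
    go (Node x l r) acc = go l (x : go r acc)

treeSort' :: Ord a => [a] -> [a]
treeSort' = toListAcc . mkTree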

Well-balanced:

Prelude> let rnds = iterate step where step x = (75*x) `mod` 65537
Prelude> length . quicksort . take 4000 . rnds $ 1
4000
(0.08 secs, 10859016 bytes)
Prelude> length . quicksort . take 8000 . rnds $ 1
8000
(0.12 secs, 21183208 bytes)
Prelude> length . quicksort . take 16000 . rnds $ 1
16000
(0.25 secs, 42322744 bytes)

Not-so-well-balanced:

Prelude> length . quicksort . map (`mod` 10) $ [1..4000]
4000
(0.62 secs, 65024528 bytes)
Prelude> length . quicksort . map (`mod` 10) $ [1..8000]
8000
(2.45 secs, 241906856 bytes)
Prelude> length . quicksort . map (`mod` 10) $ [1..16000]
16000
(9.52 secs, 941667704 bytes)
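The pattern matches the claimed complexities: in the well-balanced runs the time grows modestly as the input doubles (0.08 → 0.12 → 0.25 secs), consistent with O(N log N), while in the not-so-well-balanced runs it roughly quadruples with each doubling (0.62 → 2.45 → 9.52 secs), the signature of O(N^2). The figures in parentheses are GHCi's statistics output; to reproduce them, enable it first (exact numbers will of course vary by machine and GHC version):

Prelude> :set +s   -- print elapsed time and allocation after each evaluation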
answered Oct 11 '22 by klapaucius