I recently heard about the Library sort, and since I have to make my students work on the Insertion sort (from which the Library sort is derived), I decided to build an exercise for them on this new topic. The great thing is that this algorithm claims to have an O(n log n) complexity (see the paper title Insertion Sort is O(n log n), or the text of the Wikipedia page linked above).
I am aware that empirical measurements aren't always reliable, but I did my best, and I am a little disappointed by the following plot (blue is the Library sort, green is the in-place quicksort from Rosetta Code): the vertical axis is the average time, computed as the mean over many runs; the horizontal axis is the size of the list. Random lists of size n have integer elements between 0 and 2n. The shape of the curve doesn't really look like anything related to O(n log n).
Here is my code (including the test part); did I miss something?
# -*- coding: utf8 -*-

def library_sort(l):
    # Initialization
    d = len(l)
    k = [None]*(d<<1)
    m = d.bit_length()                      # floor(log2(n) + 1)
    for i in range(d): k[2*i+1] = l[i]
    # Main loop
    a, b = 1, 2
    for i in range(m):
        # Because the multiplication by 2 occurs at the beginning of the loop,
        # the first element will not be sorted at the first pass, which is wanted
        # (because a single element does not need to be sorted)
        a <<= 1
        b <<= 1
        for j in range(a, min(b, d+1)):
            p = 2*j-1
            s = k[p]
            # Binary search
            x, y = 0, p
            while y-x > 1:
                c = (x+y)>>1
                if k[c] != None:
                    if k[c] < s: x = c
                    else: y = c
                else:
                    e, f = c-1, c+1
                    while k[e] == None: e -= 1
                    while k[f] == None: f += 1
                    if k[e] > s: y = e
                    elif k[f] < s: x = f
                    else:
                        x, y = e, f
                        break
            # Insertion
            if y-x > 1: k[(x+y)>>1] = s
            else:
                if k[x] != None:
                    if k[x] > s: y = x      # case may occur for [2,1,0]
                    while s != None:
                        k[y], s = s, k[y]
                        y += 1
                else: k[x] = s
            k[p] = None
        # Rebalancing
        if b > d: break
        if i < m-1:
            s = p
            while s >= 0:
                if k[s] != None:
                    # In the following line, the order is very important
                    # because s and p may be equal, in which case we want
                    # k[s] (which is k[p]) to remain unchanged
                    k[s], k[p] = None, k[s]
                    p -= 2
                s -= 1
    return [x for x in k if x != None]
def quicksort(l):
    array = list(l)
    _quicksort(array, 0, len(array) - 1)
    return array

def _quicksort(array, start, stop):
    if stop - start > 0:
        pivot, left, right = array[start], start, stop
        while left <= right:
            while array[left] < pivot:
                left += 1
            while array[right] > pivot:
                right -= 1
            if left <= right:
                array[left], array[right] = array[right], array[left]
                left += 1
                right -= 1
        _quicksort(array, start, right)
        _quicksort(array, left, stop)
import random, time

def test(f):
    print "Testing", f.__name__, "..."
    random.seed(42)
    x = []
    y = []
    for i in [ 25, 50, 100, 200, 300, 500, 1000, 5000,
               10000, 15000, 25000, 40000, 50000, 100000 ]:
        n = 100000//i + 1
        m = 2*i
        x.append(i)
        t = time.time()
        for j in range(n):
            f( [ random.randrange(0,m) for _ in range(i) ] )
        y.append( (time.time()-t)/n )
    return x, y

import matplotlib.pyplot as plt
x1, y1 = test(library_sort)
x2, y2 = test(quicksort)
plt.plot(x1, y1)
plt.plot(x2, y2)
plt.show()
Edit: I computed more values and switched to the pypy interpreter in order to compute a little more in the same amount of time; here is another plot, to which I also added reference curves. It is difficult to be sure, but it still looks a little more like O(n²) than O(n log n)...
Here is a link to a pdf file which, on page 7, uses a different algorithm for what it also describes as a library sort:
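For reference, here is a minimal way such comparison curves can be overlaid (this reuses the x1, y1 values produced by the test() harness above; the constant is simply fitted to the last measured point, so only the shapes of the curves are compared, not absolute values):

import math
import matplotlib.pyplot as plt

def plot_reference(x, y, g, label):
    # Scale the reference shape g(n) so it passes through the last data point.
    c = y[-1] / g(x[-1])
    plt.plot(x, [c * g(n) for n in x], '--', label=label)

plt.plot(x1, y1, label='library_sort')
plot_reference(x1, y1, lambda n: n * math.log(n), 'c * n log n')
plot_reference(x1, y1, lambda n: n * n, 'c * n^2')
plt.legend()
plt.show()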
http://classes.engr.oregonstate.edu/eecs/winter2009/cs261/Textbook/Chapter14.pdf
The pdf file describes a hybrid counting and insertion sort, which seems significantly different from the wiki article, even though the pdf file mentions the wiki article. Because of the counting phase, the so-called gaps are exactly sized, so that the destination array is the same size as the original array rather than having to be larger by some constant factor, as mentioned in the wiki article.
For example, one method to perform what the pdf file calls a library sort on an array of integers, using the most significant byte of each integer: an array of counts (== array of bucket sizes) is created. These counts are converted into an array of starting indexes for each bucket by a cumulative sum. Then the array of starting indexes is copied to an array of ending indexes for each bucket (which means all buckets are initially empty). The sort then starts: for each integer from the source array, select the bucket via the most significant byte, do an insertion sort based on the starting and ending indexes for that bucket, then increment the ending index. A sketch of this scheme is shown below.
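A rough sketch of that scheme for non-negative integers fitting in 32 bits might look like the following (the name msb_bucket_sort and the parameter choices are mine, not the pdf's):

def msb_bucket_sort(a, key_bits=32, bucket_bits=8):
    # Bucket on the most significant byte, then insertion sort inside each bucket.
    shift = key_bits - bucket_bits
    nbuckets = 1 << bucket_bits
    # Counting pass: size of each bucket.
    counts = [0] * nbuckets
    for v in a:
        counts[v >> shift] += 1
    # Cumulative sum -> starting index of each bucket in the destination array.
    start = [0] * nbuckets
    for b in range(1, nbuckets):
        start[b] = start[b-1] + counts[b-1]
    end = list(start)                       # every bucket is initially empty
    out = [0] * len(a)
    # Insertion pass: insertion sort within the bucket, then grow the bucket.
    for v in a:
        b = v >> shift
        i = end[b]
        while i > start[b] and out[i-1] > v:
            out[i] = out[i-1]               # shift larger elements one slot right
            i -= 1
        out[i] = v
        end[b] += 1
    return out

Because the buckets partition the key range in increasing order of the most significant byte, the concatenation of the buckets is already the sorted array.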
The pdf file mentions using some type of hash to select buckets for generic objects. The hash would have to preserve the sort order.
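For example (my own illustration, not taken from the pdf), signed 32-bit integers can be mapped to an order-preserving key by adding a bias of 2**31, after which the top byte of the key selects the bucket:

def bucket_of(v, bucket_bits=8):
    # Adding the bias maps [-2**31, 2**31) onto [0, 2**32)
    # without changing the relative order of the values.
    key = v + (1 << 31)
    return key >> (32 - bucket_bits)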
My guess is that the time complexity would be dominated by the insertion sorts, which cost O(bucket_size^2) for each bucket, since the pass to create the counts / bucket indexes is linear; with k roughly equal-sized buckets of about n/k elements each, that comes to about k * O((n/k)^2) = O(n^2/k) in total, which is still quadratic for a fixed number of buckets, just with a much smaller constant.
For integers, since most of the logic for a counting-radix sort is already there, one might as well perform a multi-pass, least-significant-byte-to-most-significant-byte counting-radix sort and forget about doing the insertion sort; a sketch is shown below.
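A minimal sketch of that alternative, again assuming non-negative 32-bit integers, is a textbook least-significant-digit counting/radix sort (not the pdf's code):

def lsd_radix_sort(a, key_bits=32, radix_bits=8):
    # One counting pass per byte, from least significant to most significant.
    mask = (1 << radix_bits) - 1
    for shift in range(0, key_bits, radix_bits):
        counts = [0] * (1 << radix_bits)
        for v in a:
            counts[(v >> shift) & mask] += 1
        # Prefix sum turns counts into starting positions.
        pos = [0] * (1 << radix_bits)
        for b in range(1, len(pos)):
            pos[b] = pos[b-1] + counts[b-1]
        out = [0] * len(a)
        for v in a:                         # stable scatter into the buckets
            b = (v >> shift) & mask
            out[pos[b]] = v
            pos[b] += 1
        a = out
    return a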
Getting back to the wiki article, there is no explanation of how to detect an empty gap. Assuming that no sentinel value is available to represent an empty gap, a third array could be used to indicate whether a location in the second array contains data or is empty; one possible sketch is shown below.
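One possible way to do that (my own sketch, not from the wiki article or the pdf): keep a parallel boolean array next to the gapped array, so that a slot can be tested for emptiness without reserving any sentinel value in the data itself:

class GappedArray(object):
    # A gapped array plus a parallel occupancy array, so empty gaps
    # can be detected even if None is a legitimate data value.
    def __init__(self, size):
        self.data = [None] * size
        self.occupied = [False] * size
    def is_gap(self, i):
        return not self.occupied[i]
    def put(self, i, value):
        self.data[i] = value
        self.occupied[i] = True
    def clear(self, i):
        self.data[i] = None
        self.occupied[i] = False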