I measure the time it takes to sort a list of 10 million entries:
import time

a = range(10000000, 0, -1)
b = range(10000000, 0, -1)

# time in-place list.sort()
start = time.time()
a.sort()
end = time.time()
print end - start

# time sorted(), which returns a new list
start = time.time()
sorted(b)
end = time.time()
print end - start
The output I get:
0.272000074387
0.468999862671
My first thought was that sorted is slower because it is more generic, but that is not consistent with this post, where the difference became minuscule for large lists. What is causing the huge difference?
I am using Python 2.7.3 (32-bit) on 32-bit Windows 7, with a Q6600 processor.
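The same comparison can also be made with the timeit module, rebuilding the reversed list before each run so that .sort() never operates on already-sorted input (a sketch, with the list size reduced to one million so it runs quickly):

```python
import timeit

# setup runs before every timed execution because number=1,
# so each run starts from a fresh descending list
setup = "a = list(range(1000000, 0, -1))"

t_sort = min(timeit.repeat("a.sort()", setup=setup, number=1, repeat=5))
t_sorted = min(timeit.repeat("sorted(a)", setup=setup, number=1, repeat=5))

print(t_sort)
print(t_sorted)
```

Taking the minimum of several repeats is the usual way to reduce noise from other processes on the machine.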
list.sort() sorts the list in place, while sorted() first makes a copy of the list and sorts the copy. When the actual sorting is this cheap (Timsort detects that the whole list is one big descending run and simply reverses it, an O(n) operation), the cost of creating the copy becomes a significant fraction of the total runtime.
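The in-place versus copy difference can be seen directly (a minimal sketch, with the list size reduced to one million):

```python
data = list(range(1000000, 0, -1))  # one big descending run, as in the question

# list.sort() mutates the list in place and returns None -- no copy is made
a = list(data)
assert a.sort() is None
assert a[0] == 1               # a itself is now sorted

# sorted() builds and returns a brand-new sorted list; its argument is untouched
b = list(data)
c = sorted(b)
assert b[0] == 1000000         # b is still in descending order
assert c[0] == 1 and c is not b
```

So sorted(b) pays for allocating and filling a second million-element list on top of the sort itself, which is exactly the overhead the timings show.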