Based on this radix sort article (http://www.geeksforgeeks.org/radix-sort/), I'm struggling to understand the time complexity analysis it gives for the sort.
From the link:
Let there be d digits in the input integers. Radix sort takes O(d*(n+b)) time, where b is the base used to represent the numbers; for the decimal system, b is 10. What is the value of d? If k is the maximum possible value, then d is O(log_b(k)), so the overall time complexity is O((n+b) * log_b(k)), which looks worse than the time complexity of comparison-based sorting algorithms for large k. Let us first limit k: let k ≤ n^c, where c is a constant. In that case, the complexity becomes O(n * log_b(n)).
So I understand that the sort takes O(d*n), since there are d digits and therefore d passes, and each pass has to process all n elements, but I lost it from there. A simple explanation would be really helpful.
Assuming we use bucket sort for the sorting on each digit: for each digit (d), we process all numbers (n), placing them in buckets for all possible values a digit may have (b).

We then need to process all the buckets, recreating the original list. Placing all items in the buckets takes O(n) time, and recreating the list from all the buckets takes O(n + b) time (we have to iterate over all buckets and all the elements inside them). We do this for all d digits, giving a running time of O(d * (n + b)). A minimal sketch of that scheme follows below.
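Here is a minimal Python sketch of this (my own illustration, not code from the article; it assumes non-negative integers, and the names radix_sort, base, and buckets are just illustrative):

```python
def radix_sort(nums, base=10):
    """LSD radix sort using one bucket per digit value (0..base-1)."""
    if not nums:
        return nums
    max_val = max(nums)
    exp = 1
    while max_val // exp > 0:                  # one pass per digit: d passes total
        buckets = [[] for _ in range(base)]    # b buckets
        for x in nums:                         # placing all n items: O(n)
            buckets[(x // exp) % base].append(x)
        # recreate the list: iterate over all b buckets and the n items inside
        # them, i.e. O(n + b) per pass
        nums = [x for bucket in buckets for x in bucket]
        exp *= base
    return nums
```

For example, radix_sort([170, 45, 75, 90, 802, 2, 24, 66]) finishes in d = 3 passes, since the largest value, 802, has three decimal digits.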
This is only linear if d is a constant and b is not asymptotically larger than n. So indeed, if you have numbers of log n bits, it will take O(n log n) time.
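To make the k ≤ n^c step concrete, here is a quick sanity check (the values n = 1000, c = 2, b = 10 are arbitrary): the number of passes d equals the number of base-b digits of k, which grows like c * log_b(n), i.e. O(log_b(n)) for constant c.

```python
n, c, b = 1000, 2, 10
k = n ** c            # cap the maximum value at n^c = 1,000,000

# count the base-b digits of k by repeated division (avoids float log issues)
d, v = 0, k
while v > 0:
    d += 1
    v //= b
print(d)  # 7 passes; c * log_b(n) = 6, so d is O(log_b(n)) for constant c
```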