Suppose you had a black-box subroutine that could extract the max from an array of n elements in (log n)^a time, where 0 <= a <= 1. You're trying to create an optimal sorting algorithm that makes use of this subroutine.
The obvious solution is to call the subroutine on the entire array, swap the max with the last element, and then call the subroutine iteratively on A[1, n-1] through A[1, 2].
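For concreteness, here is a minimal Python sketch of that obvious algorithm. The black box is simulated by a plain linear scan; blackbox_max_index is a made-up stand-in, not the actual subroutine, which would take (log k)^a time:

```python
def blackbox_max_index(A, k):
    # Stand-in for the black box: index of the max of A[0:k].
    # The real subroutine would take (log k)^a time; this linear
    # scan is just a placeholder so the sketch runs.
    best = 0
    for i in range(1, k):
        if A[i] > A[best]:
            best = i
    return best

def obvious_sort(A):
    # Call the black box on prefixes of size n, n-1, ..., 2 and
    # swap each max into its final position, for a total black-box
    # cost of sum_{k=2}^{n} (log k)^a = O(n * (log n)^a).
    for k in range(len(A), 1, -1):
        i = blackbox_max_index(A, k)
        A[i], A[k - 1] = A[k - 1], A[i]
    return A

print(obvious_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```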
Is there an algorithm that runs faster than n*(log n)^a time, or is the obvious solution optimal?
No. In expectation, we need Ω(n log n) bits from the black box to sort n items, since there are n! possible orderings. When called on an array of size k, the black box runs for (log k)^a steps and returns about log k bits, for a rate of about (log k)^(1-a) bits per step. This rate is upper-bounded by (log n)^(1-a), so the obvious algorithm is asymptotically optimal.
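Spelled out, the counting argument is (a sketch; logs base 2):

```latex
% Any algorithm needs enough black-box bits to distinguish all n!
% orderings, at a rate of at most (log n)^{1-a} bits per step.
\[
\text{steps} \;\ge\; \frac{\log_2 n!}{(\log n)^{1-a}}
\;=\; \frac{\Theta(n \log n)}{(\log n)^{1-a}}
\;=\; \Theta\!\left(n (\log n)^{a}\right).
\]
```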
I don't know the exact answer, but here are some observations that hint the answer might be the naive one:
Suppose we divide the input into 4 pieces (4 can be substituted by k). Sorting each of the 4 pieces with the naive algorithm takes (n/4)*(log(n/4))^a, so all 4 pieces together take n*(log(n/4))^a; merging the 4 sorted pieces pairwise needs n/2 + n/2 + n = 2n comparisons.
total time = n*(log(n/4))^a + 2n
If a = 1 (logs base 2), this is n*(log n - 2) + 2n = n*log n, exactly the naive cost. If a < 1, the savings per element, (log n)^a - (log(n/4))^a, tends to 0 as n grows, so the savings in the first term are o(n) and cannot pay for the 2n merge cost; for large n the total exceeds n*(log n)^a.
So even from a concrete operation-count perspective rather than the Big-O perspective, the divide & conquer approach can only slow things down when a < 1, and there is no benefit when a = 1.
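A quick numeric check of the comparison above (a minimal sketch; the function names are mine and logs are base 2):

```python
import math

def naive_cost(n, a):
    # Obvious algorithm: n * (log2 n)^a
    return n * math.log2(n) ** a

def divide_and_conquer_cost(n, a):
    # Sort 4 pieces naively, then merge pairwise for 2n comparisons:
    # n * (log2(n/4))^a + 2n
    return n * math.log2(n / 4) ** a + 2 * n

for a in (0.5, 0.9, 1.0):
    for n in (2 ** 10, 2 ** 20):
        print(f"a={a}, n={n}: naive={naive_cost(n, a):,.0f}  "
              f"d&c={divide_and_conquer_cost(n, a):,.0f}")
```

For a = 1 the two costs coincide exactly; for a < 1 the divide-and-conquer total comes out larger, matching the argument above.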
I don't know if there are other tricks, however. I hope this at least provokes some ideas!