According to the book I'm reading, interpolation search takes O(log log n) time in the average case.
The book assumes that each comparison reduces the length of the list from n to sqrt(n). Given this assumption, it isn't difficult to work out the O(log log n) bound.
However, the book doesn't say anything more about this assumption other than that it is correct.
Question: can anyone explain why this is true?
It depends on the input being uniformly distributed (without such an assumption, O(log n) is the best you can do theoretically, i.e., binary search is optimal). With a uniform distribution, the actual rank of the target deviates from the interpolated probe position by about sqrt(n) (its standard deviation is on that order), so in the expected case each probe lands within sqrt(n) of the target. Thus, as you say, the search space shrinks from n to sqrt(n) on each iteration.
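For concreteness, here is a minimal Python sketch of interpolation search (not from the book, just a standard formulation). The probe line is where the uniformity assumption enters: it guesses the target's position by linearly interpolating between the endpoint keys, which is only a good guess if keys are spread roughly evenly.

```python
import random


def interpolation_search(a, target):
    """Search a sorted list `a` for `target`; return its index or -1."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= target <= a[hi]:
        if a[lo] == a[hi]:                 # avoid division by zero below
            break
        # Interpolate where `target` would sit if keys were uniform.
        pos = lo + (target - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[pos] == target:
            return pos
        if a[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo <= hi and a[lo] == target else -1


# Example with uniformly distributed keys, where the O(log log n) bound applies.
data = sorted(random.sample(range(1_000_000), 10_000))
print(interpolation_search(data, data[1234]))   # prints 1234
```

On uniform data the probe is typically within about sqrt(n) of the true position, so the window shrinks n -> sqrt(n) -> n^(1/4) -> ..., giving O(log log n) probes; on skewed data the probe can be far off and the search degrades toward O(n) in the worst case.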