The ternary search algorithm is a fast algorithm for finding the minimum or maximum of a unimodal function: a function that either increases and then decreases, or decreases and then increases. Assume that we are dealing with a function that decreases, then increases, and that we want to find the minimum value. Ternary search works using the following recursion: if the search window is small enough, return its midpoint; otherwise, evaluate the function at probe points placed at the 1/3 and 2/3 marks of the window, compare the two values, throw out the third of the window (between a boundary and the probe point nearer to it) that cannot contain the minimum, and recurse on what remains.
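In code, that recursion might look like this (a minimal Python sketch; the names and the stopping tolerance `tol` are my own):

```python
def ternary_search(f, lo, hi, tol=1e-9):
    """Return an x that approximately minimizes a unimodal f on [lo, hi]."""
    if hi - lo <= tol:                 # window small enough: return its midpoint
        return (lo + hi) / 2.0
    m1 = lo + (hi - lo) / 3.0          # probe at the 1/3 point
    m2 = hi - (hi - lo) / 3.0          # probe at the 2/3 point
    if f(m1) < f(m2):
        return ternary_search(f, lo, m2, tol)   # minimum can't lie in (m2, hi]
    return ternary_search(f, m1, hi, tol)       # minimum can't lie in [lo, m1)
```

For example, `ternary_search(lambda x: (x - 2.0) ** 2, 0.0, 10.0)` returns a value very close to 2.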
This algorithm works quickly because it can keep tossing out 1/3 of the values on each iteration.
However, I feel like I'm missing something, because I believe we can make this algorithm run much faster. In particular, notice that we always throw out the third of the range between one boundary and the nearer probe point, which means we retain the region between that probe point and the other boundary. Because ternary search picks the probe points at the 1/3 and 2/3 marks, we retain 2/3 of the values at each step. What if, instead of probing at the 1/3 and 2/3 points, we probed at the 1/2 - ε and 1/2 + ε points for an extremely small ε? Then we would toss out 1/2 - ε of the range on each step, so the size of the range would decrease much faster than if we were only tossing out 1/3 of the values each time.
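Here is a sketch of the proposed variant, with ε interpreted as a fraction of the current window so that the probes sit at the 1/2 - ε and 1/2 + ε points of the range (again, the names are mine):

```python
def modified_search(f, lo, hi, eps=1e-3, tol=1e-9):
    """Like ternary search, but the two probes straddle the midpoint."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        delta = eps * (hi - lo)        # offset is a fraction of the current window
        m1, m2 = mid - delta, mid + delta
        if f(m1) < f(m2):
            hi = m2                    # discard (m2, hi]: just under half the window
        else:
            lo = m1                    # discard [lo, m1): just under half the window
    return (lo + hi) / 2.0
```

Note that once the window is tiny relative to the magnitude of `mid`, `m1` and `m2` can round to the very same floating-point value; that hazard comes up in the answer below.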
As an example, if I pick ε = 1/1000, we get to toss out 999/2000 of the range to search on each iteration. The fraction of the range remaining after each iteration is shown below (ternary search on the left, my algorithm on the right):
iteration : ternary search >= modified (ε = 1/1000)
1 : 1.0 >= 1.0
2 : 0.666666666667 >= 0.5005
3 : 0.444444444444 >= 0.25050025
4 : 0.296296296296 >= 0.125375375125
5 : 0.197530864198 >= 0.0627503752501
6 : 0.131687242798 >= 0.0314065628127
7 : 0.0877914951989 >= 0.0157189846877
8 : 0.0585276634659 >= 0.00786735183621
9 : 0.0390184423106 >= 0.00393760959402
10 : 0.0260122948737 >= 0.00197077360181
11 : 0.0173415299158 >= 0.000986372187705
12 : 0.0115610199439 >= 0.000493679279947
13 : 0.00770734662926 >= 0.000247086479613
14 : 0.00513823108617 >= 0.000123666783046
15 : 0.00342548739078 >= 6.18952249147e-05
16 : 0.00228365826052 >= 3.09785600698e-05
17 : 0.00152243884035 >= 1.55047693149e-05
18 : 0.00101495922690 >= 7.76013704213e-06
19 : 0.000676639484599 >= 3.88394858959e-06
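The numbers above can be reproduced with a short loop (a sketch: the 0.5005 per-step retained fraction is read off the table, and printed precision will differ slightly between Python versions):

```python
ternary = modified = 1.0
for k in range(1, 20):
    print(k, ':', ternary, '>=', modified)
    ternary *= 2.0 / 3.0    # ternary search keeps 2/3 of the range per step
    modified *= 0.5005      # the modified strategy keeps 1001/2000 per step
```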
Is this modified version of the algorithm just "better" than the original version? Or is there something I'm missing here that would mean that I shouldn't use the modified strategy for picking the probe points?
This version will certainly converge faster, but it may be more prone to floating-point precision issues. For example, what if you get m + eps == m? That is a real possibility if m is large. So there is a tradeoff between accuracy and rate of convergence. Arguably the best algorithm in this class is Golden Section Search, which is both fast and accurate.
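The absorption problem is easy to demonstrate; the exact threshold depends on the magnitudes involved, but for a large enough m any fixed eps simply vanishes:

```python
m = 1e20
eps = 1e-3
print(m + eps == m)   # True: eps is rounded away, so the two probes coincide
```

And here is a minimal sketch of golden-section search (names are mine). It shrinks the window by a factor of 1/φ ≈ 0.618 per step, and because each new window reuses one of the previous probe points, every iteration costs only a single new function evaluation:

```python
import math

def golden_section_search(f, lo, hi, tol=1e-9):
    """Minimize a unimodal f on [lo, hi] via golden-section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0    # 1/phi ~ 0.618
    m1 = hi - inv_phi * (hi - lo)             # lower probe, ~38.2% across
    m2 = lo + inv_phi * (hi - lo)             # upper probe, ~61.8% across
    f1, f2 = f(m1), f(m2)
    while hi - lo > tol:
        if f1 < f2:
            hi, m2, f2 = m2, m1, f1           # keep [lo, m2]; old m1 is the new m2
            m1 = hi - inv_phi * (hi - lo)     # only one fresh evaluation needed
            f1 = f(m1)
        else:
            lo, m1, f1 = m1, m2, f2           # keep [m1, hi]; old m2 is the new m1
            m2 = lo + inv_phi * (hi - lo)
            f2 = f(m2)
    return (lo + hi) / 2.0
```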