The Son of Darts Problem was a contest on Al Zimmermann's Programming Contests that ended on 20 Jun 2010:
Suppose that you have a dartboard that is divided into R regions. Each dartboard region has a positive integer value associated with it. Further suppose that you have D darts and that you throw each of them at the dartboard. Each dart either lands in one of the board's R regions or misses the board altogether. Your score is the sum of the values for the regions in which the darts land. A dart that misses the board contributes nothing to your score. If multiple darts land in the same region, you accumulate the value for that region multiple times.
For example, suppose that R = 5, that the dartboard regions have values (1, 2, 4, 7, 11), and that D = 3. If your three darts land in regions 2, 4 and 11 you score 17 points. If one dart misses the board and the other two land in region 7 you score 14 points.
The Darts Problem is this: for a given R and D, determine what values should be associated with a dartboard's R regions in order to maximize the smallest score unattainable by throwing D darts.
D = number of darts    R = number of dartboard regions
3                      1 through 40
4                      1 through 30
5                      1 through 20
6                      1 through 10
==================================================================================
BASIC ALGORITHM USED (explained for D = 3)
I start with the result array shown below. Its entries are the values assigned to the dartboard's regions, and 0 represents a dart that misses the board. So I am going to grow this array to 41 elements (one extra to account for the 0). The value 1 is compulsory because otherwise there is no way to generate a score of 1.
 ____ ____
|    |    |
|  0 |  1 |
|____|____|
I generate the scratch array, which shows which totals are achievable in three throws using the values in the result array. The elements with a dash under them are the ones currently in the result array and were used to generate the scratch.
0 = 0 + 0 + 0
1 = 1 + 0 + 0
2 = 1 + 1 + 0
3 = 1 + 1 + 1

 ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____
|  * |  * |  * |  * |    |    |    |    |    |    |    |    |    |
|  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 |
|__-_|__-_|____|____|____|____|____|____|____|____|____|____|____|
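In code, the scratch can be generated along these lines (a rough Python sketch of the idea, treating the miss as an extra 0 choice rather than an element of the result array):

    from itertools import combinations_with_replacement

    def achievable_totals(values, darts):
        # every score reachable by throwing `darts` darts, where a dart may
        # also miss the board (the extra 0 choice)
        choices = [0] + list(values)
        return {sum(combo) for combo in combinations_with_replacement(choices, darts)}

    # the scratch above: region value 1 (plus the miss) with 3 darts
    print(sorted(achievable_totals([1], 3)))   # [0, 1, 2, 3]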
Now the candidates for the next element in the result array are 2, 3 or 4.
2 = one greater than the current largest element in the result array
4 = the smallest unachievable total
(everything in between, here 3, is also a candidate)
I try each of these candidates in turn and see which one gives the largest smallest-unachievable total. For example:
(0, 1, 2)
 ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____
|  * |  * |  * |  * |  * |  * |  * |    |    |    |    |    |    |
|  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 |
|__-_|__-_|__-_|____|____|____|____|____|____|____|____|____|____|
(0, 1, 3)
 ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____
|  * |  * |  * |  * |  * |  * |  * |  * |    |  * |    |    |    |
|  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 |
|__-_|__-_|____|__-_|____|____|____|____|____|____|____|____|____|
(0, 1, 4)
 ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____
|  * |  * |  * |  * |  * |  * |  * |    |  * |  * |    |    |  * |
|  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 |
|__-_|__-_|____|____|__-_|____|____|____|____|____|____|____|____|
max(7, 8, 7) = 8, so 3 is chosen as the next element.
If there were a tie, say between (0, 1, 2) and (0, 1, 4), I would break it by counting the achievable totals, which is higher for (0, 1, 4) (10 totals against 7). So I would choose (0, 1, 4) over (0, 1, 2).
In this case, though, (0, 1, 3) is the winner and the result array looks like the one below. I then repeat these steps until I have 41 elements.
 ____ ____ ____
|    |    |    |
|  0 |  1 |  3 |
|____|____|____|
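Put together, one greedy step looks roughly like this (a Python sketch of the approach described above, not my exact code; ties are broken by the count of achievable totals):

    def achievable_totals(values, darts):
        # grow the set of reachable scores one dart at a time; keeping the old
        # totals at each step covers darts that miss the board
        totals = {0}
        for _ in range(darts):
            totals |= {t + v for t in totals for v in values}
        return totals

    def first_gap(totals):
        x = 0
        while x in totals:
            x += 1
        return x

    def next_element(result, darts):
        # candidates run from (largest element + 1) up to the current smallest
        # unachievable total; pick the one that pushes that total highest,
        # breaking ties by the number of achievable totals
        low = max(result) + 1
        high = first_gap(achievable_totals(result, darts))
        best_key, best_cand = None, None
        for cand in range(low, high + 1):
            totals = achievable_totals(result + [cand], darts)
            key = (first_gap(totals), len(totals))
            if best_key is None or key > best_key:
                best_key, best_cand = key, cand
        return best_cand

    print(next_element([1], 3))   # 3, as in the walk-through above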
The algorithm is greedy in two senses. First, it assumes that the solution for R = 20 is a prefix of the solution for R = 30. So I do not calculate results for each value of R, only for each value of D: for D = 3, for example, I calculate the result for R = 40 and then take its first 30 elements as the result for R = 30.
Second, it assumes that the target at each step (while building the result array) is simply to maximize the smallest unachievable total at that stage.
To perform better than brute force, the algorithm eliminates some candidates for the result array. For example, in the case depicted in the diagram below (for a (0, 1, 4) result array), I only consider 5, 6 and 7 as candidates for the next element and exclude 2 and 3 even though they are not yet used (see the small sketch after the diagram). This assumption might give suboptimal results in some cases, but it cuts down on a lot of calculation once the scratch grows to thousands of elements.
 ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____ ____
|  * |  * |  * |  * |  * |  * |  * |    |  * |  * |    |    |  * |
|  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 |
|__-_|__-_|____|____|__-_|____|____|____|____|____|____|____|____|
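In code, the pruning is just a bound on the candidate range (a small self-contained sketch):

    def first_gap(values, darts):
        totals = {0}
        for _ in range(darts):
            totals |= {t + v for t in totals for v in values}
        x = 0
        while x in totals:
            x += 1
        return x

    def candidates(result, darts):
        # from one above the largest region value up to the smallest unachievable total
        return list(range(max(result) + 1, first_gap(result, darts) + 1))

    print(candidates([1, 4], 3))   # [5, 6, 7], matching the diagram above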
IMPROVEMENTS TO THE ALGORITHM
Since this was not giving very good results, I tried generating multiple result sets for each value of D. Instead of always choosing the best next element for the result, I could also choose the second best or the third best. With 39 steps to go (to reach 41 elements in the result), allowing all three choices at every step would give around 3^39 cases, which is far too many. So I decided to allow at most one second-best and at most one third-best choice per run. That reduced the number of cases to around 40C1 * 39C1 = 577980 results. Any more than this would lead to a HUGE number of cases for higher values of R (and higher values of D).
For each of these ~6E5 result sets, I calculate the smallest unachievable total for every value of R from 1 to 40, using the first R elements. Each value of R then gets to pick the best of these ~6E5 candidates.
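Evaluating the pool per R is then a matter of scoring every prefix; roughly (a sketch with two hand-made sequences standing in for the real pool):

    def smallest_unachievable(values, darts):
        totals = {0}
        for _ in range(darts):
            totals |= {t + v for t in totals for v in values}
        x = 0
        while x in totals:
            x += 1
        return x

    def best_per_r(result_sets, darts, max_r):
        # for each R, score the R-element prefix of every generated result set
        # and keep the best smallest-unachievable total
        best = {}
        for result in result_sets:
            for r in range(1, min(max_r, len(result)) + 1):
                score = smallest_unachievable(result[:r], darts)
                if score > best.get(r, 0):
                    best[r] = score
        return best

    print(best_per_r([[1, 3, 5], [1, 4, 6]], darts=3, max_r=3))   # {1: 4, 2: 8, 3: 15}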
PROBLEMS
This algorithm does not produce the best result sets, or even close.
I even went to fourth and fifth best choices for D = 3, and they did not give any major improvements in the results. So, I did not know where I should tune my algorithm.
Where can I tune this algorithm to get better results?
Are there any obvious mistakes that the algorithm makes?
Any comments/suggestions are welcome.
There was another question on the same topic here. I am more interested in knowing where my algorithm can be improved.
I actually participated in this contest; as you can see, I placed 100th overall with 87.00 points. I used your method as well, because I realized that generating a "reasonable" solution for every problem was the first hurdle to overcome. At the time I ran it, the points it generated were enough to put me at around 94, but as the top scorers improved, that number quickly fell to around 75.
The first and easiest thing you can do is make sure your program runs in an instant; if it doesn't, spend time optimizing the crap out of it first. Once it runs fast enough, you can generate every possible set for D = 3, R <= 5 in no time at all. You can then use the sets at R = 5 as seeds for your greedy algorithm. Now instead of one greedy solution you have a few thousand, and you just need to keep track of the highest value generated at each level R and the values which create it. That is easy enough to do, and it will get your score up above 80.
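Something along these lines would do for the seed generation (a rough Python sketch, not the code I used; it skips values above the current smallest unachievable total, since such a value would leave that total unattainable for good):

    def first_gap(values, darts):
        totals = {0}
        for _ in range(darts):
            totals |= {t + v for t in totals for v in values}
        x = 0
        while x in totals:
            x += 1
        return x

    def all_seeds(darts, r, prefix=(1,)):
        # every increasing value set of size r that starts with 1; each new value
        # is kept between (largest + 1) and the current smallest unachievable
        # total, because a larger value could never raise the objective
        if len(prefix) == r:
            yield prefix
            return
        for v in range(prefix[-1] + 1, first_gap(prefix, darts) + 1):
            yield from all_seeds(darts, r, prefix + (v,))

    seeds = list(all_seeds(3, 5))   # the D = 3, R <= 5 starting sets
    print(len(seeds))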
I spent almost a month optimizing the function that tests an arbitrary input set, so that I was able to push my seed generator up to R = 10 and run it in about 8 hours on a single core. This guaranteed having the best possible solution for D = 3, R <= 10, and much better answers for R > 10 than I had with the original greedy result. This got my score pretty close to where it ended up, after running R as high as possible for each D while still being able to run the program in a single day. I was able to reach R = 9 for D = 4, R = 8 for D = 5, and R = 8 for D = 6.
After these runs I calculated that I wouldn't be able to increase R by 1 for any D over the numbers just listed and have the search complete in my lifetime. Obviously it was time to look for a new technique. I figured I was doing a great job on the front end by testing every possible starting segment, so why not shift some of that effort to the back end by testing deeper result sets for each of my starting segments. Instead of grabbing the best next value on the same level, I performed a 2-level look-ahead and took the best next value from two levels deep. For instance, you always start with a 1, then you enumerate all values for R = 2 (2, 3, 4), and then for each of those you evaluate all possible values for R = 3: 2 gives (3, 4, 5, 6, 7), 3 gives (4, 5, 6, 7, 8), 4 gives (5, 6, 7). Take the highest of all those evaluations, and then choose the value at R = 2 whose subtree contains that highest value at R = 3. This required a lot more processing time and forced me to lower the maximum R I could use to seed it, but it produced better results for the deeper searches.
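Roughly, the look-ahead selection works like this (a Python sketch of the idea, not my original code):

    def achievable_totals(values, darts):
        totals = {0}
        for _ in range(darts):
            totals |= {t + v for t in totals for v in values}
        return totals

    def first_gap(values, darts):
        totals = achievable_totals(values, darts)
        x = 0
        while x in totals:
            x += 1
        return x

    def candidates(values, darts):
        return range(max(values) + 1, first_gap(values, darts) + 1)

    def next_element_lookahead(result, darts):
        # choose the child whose best grandchild (two levels deep) has the
        # highest smallest-unachievable total
        best_score, best_value = -1, None
        for v in candidates(result, darts):
            child = result + [v]
            score = max(first_gap(child + [w], darts) for w in candidates(child, darts))
            if score > best_score:
                best_score, best_value = score, v
        return best_value

    print(next_element_lookahead([1], 3))   # 4 here, where the plain greedy step picks 3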
At this point I gave up on greedy approaches and used this competition to expand my AI knowledge. I tried various Monte Carlo techniques as well as a basic genetic algorithm. I learned a lot about Monte Carlo methods, and in looking up some of the top performers in this competition I found publications on optimizing Monte Carlo selection criteria that were beyond my understanding. I wish I could help you further, but I feel confident in arguing that breaking 90 points with a greedy approach is very unlikely in my lifetime. I would be interested to see how much better the answers would get if it were multithreaded and a lot of computing power were thrown at it.
I don't have any of the work I did for this problem with me, but I remember concluding that look-ahead and greater enumeration of starting seeds were the only two possible improvements to the greedy algorithm for this problem. I'll look for that stuff tonight and post the thought process here if I can find the notes.
EDIT: the code was originally posted on a server which has since been abandoned. Please message me if you'd like it re-posted.
Thanks for a very interesting question! I spent a few minutes understanding it.
I did not see a notational formulation of the problem, so let me take a shot at coming up with one here.
Firstly, an observation (which you also made, correctly): one of the region values must be 1, otherwise a score of 1 would not be attainable.
Secondly, since we are trying to MAXIMIZE the smallest unattainable score, there is no point in repeating region values.
So that gives a degenerate (but not optimal) solution: 1, 2, 3, ..., R
The objective function value of this degenerate solution is: D*R+1
For example, if you have D = 4 darts and you label your dartboard with the scores 1, 2, ..., 40, then every score from 1 to 160 is attainable, and 161 is not attainable.
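A quick brute-force check of this bound for small cases (a rough Python sketch):

    from itertools import combinations_with_replacement

    def smallest_unattainable(values, darts):
        # a dart may also miss the board, hence the extra 0 choice
        choices = [0] + list(values)
        totals = {sum(c) for c in combinations_with_replacement(choices, darts)}
        x = 1
        while x in totals:
            x += 1
        return x

    for d, r in [(3, 4), (4, 40)]:
        print(smallest_unattainable(range(1, r + 1), d), d * r + 1)   # prints 13 13 and 161 161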
Clearly this solution is not optimal, and optimal solution will involve possibly some powers of 2 and definitely some thought.
Now for the notation:
Let f(X, D, {Y_1..Y_R}) = 1 if a score of X is attainable with D darts on a board whose region values are Y_1..Y_R, and 0 otherwise.
As in the example discussed earlier, f(160, 4, {1..40}) = 1 and f(161, 4, {1..40}) = 0.
The objective function value of the puzzle can then be denoted as:
g(D, {Y_1..Y_R}) = Min X such that f(X, D, {Y_1..Y_R}) = 0
By observation 1 earlier, we can assume that Y_1 = 1.
Also, a recursive formulation of function f can be as follows.
f(X, D, {1..Y_R}) = 1 if:
[Will continue to work on this and post more as I develop it.]
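In the meantime, a direct memoised evaluation of f and g is straightforward (a Python sketch, not necessarily the recursion being developed above):

    from functools import lru_cache

    def make_f(region_values):
        values = tuple(region_values)

        @lru_cache(maxsize=None)
        def f(x, d):
            # 1 if score x is attainable with d darts on this board, else 0
            if x == 0:
                return 1
            if d == 0 or x < 0:
                return 0
            if f(x, d - 1):                       # one dart misses the board
                return 1
            return 1 if any(f(x - y, d - 1) for y in values) else 0

        return f

    def g(darts, region_values):
        # smallest score X with f(X, darts) = 0
        f = make_f(region_values)
        x = 1
        while f(x, darts):
            x += 1
        return x

    print(g(4, range(1, 41)))   # 161, the degenerate 1..40 example above
    print(g(3, [1, 4, 5]))      # 16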