
Clarification on `failures` solver statistic in MiniZinc

I have been playing around with a simple n-queens model in MiniZinc:

include "globals.mzn";
int: n_queens = 8;
array[1..n_queens] of var 1..n_queens: queens;

constraint alldifferent(queens);
constraint alldifferent(i in 1..n_queens) (queens[i] + i);
constraint alldifferent(i in 1..n_queens) (queens[i] - i);
solve satisfy;

The MiniZinc handbook mentions failures as the "number of leaf nodes that were failed". Following are the statistics after running the model:

%%%mzn-stat: initTime=0.000576
%%%mzn-stat: solveTime=0.000822
%%%mzn-stat: solutions=1
%%%mzn-stat: variables=24
%%%mzn-stat: propagators=19
%%%mzn-stat: propagations=1415
%%%mzn-stat: nodes=47
%%%mzn-stat: failures=22
%%%mzn-stat: restarts=0
%%%mzn-stat: peakDepth=5
%%%mzn-stat-end

There were 22 failures. Being a beginner to constraint programming, my understanding was that the entire purpose of the paradigm is to prune and avoid leaf nodes as much as possible. I am extra confused since the peak depth of the search tree is reported as 5 (not 8).

Am I interpreting these statistics right? If yes, why are there leaf node failures in the model? Will I create a better model by trying to reduce these failures?

asked Mar 02 '23 by recentadvances


1 Answer

Those values depend on the search strategy. Sometimes a leaf node cannot be avoided because nothing evaluated before it told the solver that the node was going to fail, so it was never pruned. Modeling the problem differently can prevent some failures, and in optimization problems it can also keep the solver from visiting suboptimal solutions.

These are the first three nodes that were evaluated in the search tree under MiniZinc's default search strategy. I labeled them in the image of the Search Tree in the order they were evaluated, and added labels 4 and 5 to show the arrival at a feasible solution.

In the image, blue dots are nodes where there is still uncertainty, red squares are failures, white dots are unevaluated nodes, large triangles are whole branches where the search resulted only in failures, the green diamond marks a feasible solution, and orange diamonds mark feasible-but-not-best solutions (only in optimization problems).

Here is the explanation of each labeled node:

0: Root node: all variables are unassigned

Nothing has happened yet; these are all the decision variables with their full domains:
queens = array1d(1..8, [[1..8], [1..8], [1..8], [1..8], [1..8], [1..8], [1..8], [1..8]]);

1: First decision

The solver picked the last variable and split on the smallest value in its domain: either queens[8] = 1 (left child of the root) or queens[8] in [2..8] (right child of the root). It evaluates queens[8] = 1 first, which brings the first node into existence:
queens = array1d(1..8, [[2..7], {2..6,8}, {2..5,7..8}, {2..4,6..8}, {2..3,5..8}, {2,4..8}, [3..8], 1]);
Here the decision queens[8] = 1 has already been propagated to the other variables, removing values from their domains.
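The effect of that single decision can be reproduced with a minimal forward-checking sketch in Python (not MiniZinc's actual propagation engine, and `propagate` is a hypothetical helper): after fixing one queen, remove every attacked value from the other columns' domains.

```python
def propagate(domains, fixed_col, val):
    """Remove from every other column the values attacked by the fixed queen."""
    result = []
    for col, dom in enumerate(domains):
        if col == fixed_col:
            result.append({val})          # the decided variable keeps one value
        else:
            dist = abs(col - fixed_col)   # same row or same diagonal is attacked
            result.append({v for v in dom if v != val and abs(v - val) != dist})
    return result

domains = [set(range(1, 9)) for _ in range(8)]   # root node: full domains [1..8]
domains = propagate(domains, 7, 1)               # the decision queens[8] = 1 (0-based index 7)
print(sorted(domains[6]))   # queens[7] becomes {3..8}, matching the node-1 state above
```

Running this reproduces the domains shown for node 1, e.g. queens[1] = [2..7] and queens[7] = [3..8].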

2: The search continues

The solver splits again, now on queens[7]. This is the left child node, where queens[7] = 3 (the minimum value in that variable's domain), after propagating that decision to the other variables:
queens = array1d(1..8, [{2,4..7}, {2,4..6}, {2,4..5,8}, {2,4,7..8}, {2,6..8}, [5..8], 3, 1]);

In hindsight (more like cheating by looking at the image of the Search Tree) we know that this whole branch of the search will end in failures, but we cannot know that while searching, because there is still uncertainty in some variables. To know it for sure we would have to evaluate all the remaining possibilities, which might still be feasible; hopefully we find a satisfying solution before that. Notice, though, that some pruning has already been done in the form of nodes that will never exist. For example, queens[4] can only take the values 2, 4, 7, 8 at this point, even though no decision has been made on it yet; the solver has eliminated values that it knows would certainly lead to failure. In a brute-force search this variable would still have its root-node domain [1..8], so propagating the constraints gives us a smarter search.

3: First failure: but we carry on

Following the same strategy, the solver splits on queens[6], again taking the minimum value, queens[6] = 5. Propagating this decision to the undecided variables leaves no assignment that satisfies all the constraints (here it forced the value 8 onto two queens), so this is a dead end and the solver must backtrack.
queens = array1d(1..8, [7, 2, 4, 8, 8, 5, 3, 1]); ---> Failure

So the very first three nodes of the search led to a failure.

The search continues like that: since the choice queens[6] = 5 caused a failure, the solver moves on to the rest of the domain, queens[6] in [6..8]. That search also results only in the failures encircled in red in the image of the Search Tree.

As you can probably guess by now, the search strategy is roughly: go through the variables in order, and split each variable's domain by trying the smallest available value first, putting the rest of the domain in the other child node. In MiniZinc's search annotations these choices are called input_order and indomain_min.
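That strategy can be sketched in Python (a hypothetical illustration, not what Gecode actually runs): variables are assigned in a fixed order, values are tried smallest-first, and every dead end counts as a failure. This is plain backtracking with no constraint propagation, so the counts will not match the solver's statistics above.

```python
def queens_search(n):
    """Backtracking n-queens in input_order / indomain_min style."""
    stats = {"solutions": 0, "failures": 0}

    def consistent(board, col, val):
        # a new queen must not share a row or a diagonal with earlier ones
        return all(v != val and abs(v - val) != col - c
                   for c, v in enumerate(board))

    def search(board):
        col = len(board)
        if col == n:
            stats["solutions"] += 1
            return
        extended = False
        for val in range(1, n + 1):       # indomain_min: smallest value first
            if consistent(board, col, val):
                extended = True
                search(board + [val])
        if not extended:
            stats["failures"] += 1        # a failed leaf: no value fits

    search([])
    return stats

print(queens_search(8)["solutions"])   # the 8-queens problem has 92 solutions
```

With propagation (as in the answer above) many of these failures would never be reached at all, which is exactly the point of constraint programming.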

Now we fast forward the search to the node labeled 4.

4: Prelude to a solution: are we there yet?

Here you can see that queens[8] = 1 remains the same, but queens[7] = 5, while in node 2 it was queens[7] = 3. That means all the possibilities with queens[8] = 1 and queens[7] in [3..4] were either evaluated or pruned, and all of them led to failures.
queens = array1d(1..8, [{2,4,6..7}, {2..3,6}, {2..4,7}, {3..4,7}, {2,6}, 8, 5, 1]);

Then this node split into queens[5] = 2 (left child), which led to more failures, and queens[5] = 6 (right child).

5: We struck gold: a feasible solution!

queens[5] = 6 propagated, and the result satisfies all the constraints, so we have a solution and the search stops.
queens = array1d(1..8, [4, 2, 7, 3, 6, 8, 5, 1]);

Pruning

Arriving at the solution required only 47 nodes of the gigantic whole search tree. The area inside the blue line is the Search Tree containing the nodes labeled 0, 1, 2, 3, 4, 5, and it is big even after pruning, for this relatively small instance of 8 decision variables with domains of cardinality 8. The global constraint certainly reduces the span of the tree by a lot, since it communicates the variables' domains to each other much more effectively than the solver's generic constraint store. The pruned search tree has only 723 nodes in total, of which only 362 are leaves, while a brute-force search could generate all 8^8 = 16,777,216 leaf nodes directly (it might not, but it could). That is a big saving: of those 16,777,216 combinations, only 362 were worth considering, and just 92 were feasible, roughly 0.0005% of the whole search space you would face by, for example, generating 8 random digits in the range [1..8] and evaluating feasibility afterwards. Talk about a needle in a haystack.
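The arithmetic from the paragraph above, written out as a quick sanity check (these are hand-computed numbers, not solver output):

```python
total = 8 ** 8                 # 8 variables, each with 8 possible values
assert total == 16_777_216     # the raw search space

feasible = 92                  # known number of 8-queens solutions
print(f"{feasible / total:.6%}")   # roughly 0.000548% of the raw space
```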

Pruning basically means reducing the search space: anything better than evaluating ALL the combinations, even removing a single possibility, counts as a pruned search space. Since this was a satisfaction problem rather than an optimization one, the pruning just removes infeasible values from the domains of the variables.
In optimization problems there are two types of pruning: the satisfaction pruning described above, which eliminates impossible solutions, and the pruning done by the bounds of the objective function. When the bounds of the objective can be determined before all the variables have been assigned, and the branch is determined to be "worse" than the current "best" value found so far (e.g. in a minimization problem, the smallest value the objective could take in a branch is larger than the objective of the best feasible solution found so far), that branch can be pruned. The branch surely contains feasible (but not as good) solutions as well as infeasible ones, so this saves work. Note that you still have to prune or evaluate the entire tree if you want to find the optimal solution and prove that it is optimal.
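That bounds pruning can be illustrated with a toy branch-and-bound sketch in Python (a made-up example, unrelated to MiniZinc's internals): choose one cost per row so the total is minimal, cutting any branch whose partial sum plus an optimistic bound already matches or exceeds the best complete solution found so far.

```python
def bnb_min(costs):
    """Toy branch-and-bound: minimize the sum of one chosen cost per row."""
    best = float("inf")
    # optimistic bound: the smallest cost in each remaining row
    row_min = [min(row) for row in costs]
    suffix = [sum(row_min[i:]) for i in range(len(costs) + 1)]

    def search(row, partial):
        nonlocal best
        if partial + suffix[row] >= best:
            return                      # bound pruning: cannot beat the incumbent
        if row == len(costs):
            best = partial              # a complete, improving solution
            return
        for c in costs[row]:
            search(row + 1, partial + c)

    search(0, 0)
    return best

print(bnb_min([[3, 1], [4, 2], [5, 9]]))   # prints 8 (= 1 + 2 + 5)
```

The pruned branches may well contain feasible assignments, but the bound proves none of them can improve on the incumbent, so they are skipped safely.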

To explore search trees like the ones in the images, you can run your model with the gecode-gist solver in the MiniZinc IDE, or use minizinc --solver gecode-gist <modelFile> <dataFile> on the command line. Double-clicking on one of the nodes shows the state of the decision variables, just like the ones in this post.

Going even further, you can use solve :: int_search(queens, varChoise, valChoise, complete) satisfy; to test these different search strategies:

% variable selections:
ann: varChoise
%          = input_order
%          = first_fail
%          = smallest
%          = largest
;

% value selections:
ann: valChoise
%          = indomain_min
%          = indomain_max
%          = indomain_median
%          = indomain_random
%          = indomain_split
%          = indomain_reverse_split
;

Just paste this into your model and uncomment one varChoise annotation and one valChoise annotation to test that combination of variable selection and value selection, and see whether one strategy finds the solution with fewer failures, fewer nodes, or fewer propagations. You can read more about them in the MiniZinc documentation.

answered May 16 '23 by Matias Agelvis