How Can an Elastic SubProblem in PuLP be used as a Constraint?

In Python PuLP, a linear programming constraint can be turned into an elastic subproblem.

http://www.coin-or.org/PuLP/pulp.html?highlight=lpsum#elastic-constraints

Solving the subproblem minimizes the (penalized) distance from the target value.

Of course, taken on its own the subproblem's optimum is to hit the target exactly, but the whole point of elasticizing is that we believe that may be infeasible in the overall problem.

How can the subproblem be incorporated into the overall problem? I tried adding it to the problem the way you'd add a constraint, and that threw a type error. I tried putting it in the objective function, and that did not work either.

I can't find anything in the documentation above or the examples hosted here:

https://code.google.com/p/pulp-or/wiki/OptimisationWithPuLP?tm=6

Here is the subproblem I formulated:

capacity = LpConstraint(e=lpSum([ x[m][n] * len(n.items) for n in N ]),
    sense=-1, rhs=30, name=str(random.random()))
stretch_proportion = 30/50
elasticCapacity = capacity.makeElasticSubProblem(penalty=50,
    proportionFreeBoundList=[1,stretch_proportion])

Here is the closest I think I have come to incorporating it into the LP objective:

def sub(m):
    capacity = LpConstraint(e=lpSum([ x[m][n] * len(n.items) for n in N ]),
        sense=-1, rhs=30, name=str(random.random()))
    stretch_proportion = 30/50
    elasticCapacity = capacity.makeElasticSubProblem(penalty=50,
        proportionFreeBoundList=[1,stretch_proportion])
    elasticCapacity.solve()
    return elasticCapacity.isViolated()

...

prob += lpSum( [ x[m][n] * reduce(op.add, map(D2, [i.l for i in n.items], [j.l for j in n.items]))\
    for n in N for m in M ] ) + 50 * sub(m)
Asked Dec 03 '14 by Bret Fontecchio


1 Answer

Here is the short answer, in the form of a sketch of a working example:

Create the problem, and add hard constraints and the objective.

prob = LpProblem("My Problem", LpMinimize)
....

After you've done that, define the soft (elastic) constraint and add it to the problem with prob.extend(), like so:

c_e_LHS = LpAffineExpression([(var1,coeff1), (var2,coeff2)])   # example left-hand-side expression
c_e_RHS = 30   # example right-hand-side value
c_e_pre = LpConstraint(e=c_e_LHS, sense=-1, name='pre-elastic', rhs=c_e_RHS)   # The constraint LHS <= RHS (sense=-1)
c_e = c_e_pre.makeElasticSubProblem(penalty=100, proportionFreeBoundList=[.02,.02])   # The definition of the elasticized constraint 
prob.extend(c_e)   # Adding the constraint to the problem

At this point the problem has been modified to include the soft (elastic) constraint, and you can solve it. (Note that makeElasticSubProblem() returns a subproblem object rather than an LpConstraint, which is why adding it to the problem with += raises a type error; prob.extend() is how you fold it in.)
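
If you want a quick sanity check on what extend() did, you can print the problem's objective and constraints before solving; the penalty variables and terms added by the elastic subproblem should now show up there:

print(prob.objective)      # original cost terms plus the penalty terms from c_e
print(prob.constraints)    # now also contains the elasticized constraint
prob.solve()
print(LpStatus[prob.status])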

Here is a longer answer: this question is answered in the pulp-or-discuss Google group, in a thread on adding an elastic constraint. I created the following example, for my own purposes, based on that discussion and on the longer formulation of the blending problem on the PuLP documentation website.

First you create the problem:

from pulp import *
prob = LpProblem("The Whiskas Problem", LpMinimize)

Create a list of the Ingredients:

Ingredients = ['CHICKEN', 'BEEF', 'MUTTON', 'RICE', 'WHEAT', 'GEL']

A dictionary of the costs of each of the Ingredients is created:

costs = {'CHICKEN': 0.013, 
         'BEEF': 0.008, 
         'MUTTON': 0.010, 
         'RICE': 0.002, 
         'WHEAT': 0.005, 
         'GEL': 0.001}

A dictionary of the protein percent in each of the Ingredients is created:

proteinPercent = {'CHICKEN': 0.100, 
                  'BEEF': 0.200, 
                  'MUTTON': 0.150, 
                  'RICE': 0.000, 
                  'WHEAT': 0.040, 
                  'GEL': 0.000}

A dictionary of the fat percent in each of the Ingredients is created:

fatPercent = {'CHICKEN': 0.080, 
              'BEEF': 0.100, 
              'MUTTON': 0.110, 
              'RICE': 0.010, 
              'WHEAT': 0.010, 
              'GEL': 0.000}

A dictionary of the fibre percent in each of the Ingredients is created:

fibrePercent = {'CHICKEN': 0.001, 
                'BEEF': 0.005, 
                'MUTTON': 0.003, 
                'RICE': 0.100, 
                'WHEAT': 0.150, 
                'GEL': 0.000}

A dictionary of the salt percent in each of the Ingredients is created:

saltPercent = {'CHICKEN': 0.002, 
               'BEEF': 0.005, 
               'MUTTON': 0.007, 
               'RICE': 0.002, 
               'WHEAT': 0.008, 
               'GEL': 0.000}


A dictionary called 'ingredient_vars' is created to contain the referenced Variables:

ingredient_vars = LpVariable.dicts("Ingr",Ingredients,0)

Add the objective:

prob += lpSum([costs[i]*ingredient_vars[i] for i in Ingredients]), "Total Cost of Ingredients per can"

Create the hard constraints (here is where my example starts to deviate from the one in the documentation):

c1 = lpSum([ingredient_vars[i] for i in Ingredients]) == 100, "PercentagesSum"
c2 = lpSum([proteinPercent[i] * ingredient_vars[i] for i in Ingredients]) >= 8.0, "ProteinRequirement"
c3 = lpSum([fatPercent[i] * ingredient_vars[i] for i in Ingredients]) >= 6.0, "FatRequirement"
c4 = lpSum([fibrePercent[i] * ingredient_vars[i] for i in Ingredients]) <= 2.0, "FibreRequirement"
c5 = lpSum([saltPercent[i] * ingredient_vars[i] for i in Ingredients]) <= 0.4, "SaltRequirement"

Add the hard constraints:

for con in [c1,c2,c3,c4,c5]:
    prob += con

Define the left-hand-side expression of the constraint to be elasticized:

c6_LHS = LpAffineExpression([(ingredient_vars['GEL'],1), (ingredient_vars['BEEF'],1)])

Define the constraint to be elasticized: Gel and Beef together total at most 30%:

c6 = LpConstraint(e=c6_LHS, sense=-1, name='GelBeefTotal', rhs=30)

Define the elasticized constraint:

c6_elastic = c6.makeElasticSubProblem(penalty=100, proportionFreeBoundList=[.02, .02])

And this is the way to add an elastic (i.e. soft) constraint to the problem:

prob.extend(c6_elastic)

Solve the problem:

prob.writeLP("WhiskasModel.lp")
prob.solve()

Output the optimal solution:

for v in prob.variables():
    print(v.name, "=", v.varValue)

If you play around with the penalty and bounds, you can verify that it works as advertised.
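
For example (a quick check, using the variables defined above), you can look at how much of the soft 30-unit cap was used and how the total cost responds as you change the penalty:

print(value(ingredient_vars['GEL'] + ingredient_vars['BEEF']))   # Gel + Beef total vs. the soft cap of 30
print(value(prob.objective))                                     # total cost, including any penalty incurred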

PS, my understanding is that the title of the question may be misleading. Adding an elastic subproblem amounts to adding some cost terms to the objective, corresponding to a "soft constraint." A soft constraint is not a constraint -- it's a shorthand for a set of cost terms in the objective.
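
To make that concrete: if you built such a soft constraint by hand instead of calling makeElasticSubProblem() and prob.extend(), it would look roughly like the sketch below (illustrative names only; this is a one-sided simplification, whereas PuLP's elastic subproblem also handles the symmetric free band and penalties on both sides for you). A nonnegative slack variable lets the cap stretch, and the penalty on that slack is just another cost term in the objective:

excess = LpVariable("GelBeef_excess", lowBound=0)           # amount by which the soft cap is exceeded
prob += c6_LHS <= 30 * 1.02 + excess, "GelBeefSoftManual"   # can stretch past the 2% free band via 'excess'...
prob.objective += 100 * excess                              # ...at a cost of 100 per unit, added to the objective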

Answered Oct 23 '22 by GrayOnGray