
How to quickly get a feasible solution to a linear program in Python?

Goal: Compute the intersection of two convex polytopes.

I am using scipy.spatial.HalfspaceIntersection to do this. The following image shows the resultant intersection (rviz screenshot):

My problem: Determine an initial feasible point.

You see, the current Python implementation of scipy.spatial.HalfspaceIntersection requires an interior_point to be passed as an argument.

interior_point : ndarray of floats, shape (ndim,)
Point clearly inside the region defined by halfspaces. Also called a feasible point, it can be obtained by linear programming.

Now, at the moment, I am manually supplying the feasible point because I was just drafting a prototype to experiment with HalfspaceIntersection. But now I have reached a point where I do not want to manually specify it.
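For context, here is a minimal sketch of what "manually supplying the feasible point" looks like. The polytope below (a unit square) is a made-up stand-in for the actual halfspaces; SciPy expects each halfspace as a stacked row [A; b] representing Ax + b <= 0:

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

# Halfspaces in the form A x + b <= 0, one stacked row [A; b] each.
# Hypothetical example: the unit square 0 <= x <= 1, 0 <= y <= 1.
halfspaces = np.array([
    [-1.0,  0.0,  0.0],   # -x     <= 0
    [ 0.0, -1.0,  0.0],   # -y     <= 0
    [ 1.0,  0.0, -1.0],   #  x - 1 <= 0
    [ 0.0,  1.0, -1.0],   #  y - 1 <= 0
])

# Manually supplied point strictly inside the square.
interior_point = np.array([0.5, 0.5])

hs = HalfspaceIntersection(halfspaces, interior_point)
print(hs.intersections)  # vertices of the intersection
```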

SciPy's optimization module scipy.optimize.linprog implements two general Linear Programming (LP) solvers: simplex and interior-point. However, they seem to require a cost function. [1]

Since I want to spend as little processing time as possible computing this feasible point, I was wondering how I can run any of these LP methods without a cost function, i.e., only run until the solution has reached a feasible status.

Questions:

  1. Is scipy.optimize.linprog the right way to go for computing this feasible interior point?

  2. If yes, how can I use either simplex or interior-point without a cost function?

  3. Why does scipy.spatial.HalfspaceIntersection require an interior point to be passed as an argument in the first place? To the best of my understanding, an intersection of halfspaces is the removal of the redundant inequalities of a given set of inequalities. Why is a feasible point required for this?

asked Jul 30 '18 by Henrique Ferrolho



1 Answer

You can specify a constant cost function, for example the zero vector.

Here is an example:

from numpy import array, zeros
from scipy.optimize import linprog

A = array([[1] + 9 * [0], 9 * [0] + [1]])
b = array([1, 1])

Measuring the performance of this approach reveals that it is very efficient:

%time res = linprog(c=zeros(A.shape[1]), A_eq=A, b_eq=b)

Output:

CPU times: user 5 µs, sys: 1 µs, total: 6 µs
Wall time: 11 µs

Moreover, according to res.nit, we were done after only 2 iterations.

The result res.x is correct:

array([ 1.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  1.])
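Applied to your setting, the same zero-cost trick yields a feasible point directly from the halfspaces. A sketch, again using a hypothetical unit square in SciPy's [A; b] stacking convention (Ax + b <= 0); note that linprog's default bounds are x >= 0, so unbounded variables must be stated explicitly:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical halfspaces, rows [A; b] with A x + b <= 0 (the unit square).
halfspaces = np.array([
    [-1.0,  0.0,  0.0],
    [ 0.0, -1.0,  0.0],
    [ 1.0,  0.0, -1.0],
    [ 0.0,  1.0, -1.0],
])
A = halfspaces[:, :-1]
b = -halfspaces[:, -1]          # rewritten as A x <= b

# Zero objective: any feasible x is reported as optimal.
res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
              bounds=[(None, None)] * A.shape[1])
print(res.x)  # a feasible point of the polytope
```

The resulting res.x can then be handed to HalfspaceIntersection as interior_point, provided it is strictly inside (see the caveat about vertices below).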

Note that the simplex algorithm is designed to find the vertices of the polyhedron defined by the linear constraints. To my understanding, interior-point-based methods do not have such a preference, although I'm not familiar with the implementation behind scipy's linprog. Hence, since your requirement is that the point is "clearly inside the region defined by halfspaces", I suggest either one of the following:

  • Either, pass method='interior-point' to linprog.
  • Or, compute different vertices and exploit that the polyhedron is convex:
    1. Add some noise to the constant objective function (e.g., via np.random.randn)
    2. Solve multiple instances of the noise-augmented LP by varying the noise seed (np.random.seed).
    3. Finally, use the mean of your solution as the final interior point "clearly inside the region".
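The noise-averaging steps above can be sketched as follows. The unit-square constraints are a hypothetical stand-in for your polytope; each randomized LP lands on some vertex, and by convexity the mean of those vertices lies inside the polyhedron:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical polytope A x <= b: the unit square.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])

# Solve several LPs with random objectives; each optimum is a vertex.
rng = np.random.default_rng(0)
vertices = []
for _ in range(8):
    c = rng.standard_normal(A.shape[1])   # noisy objective instead of zeros
    res = linprog(c=c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    if res.success:
        vertices.append(res.x)

# By convexity, the mean of the collected vertices is a feasible point.
interior_point = np.mean(vertices, axis=0)
print(interior_point)
```

If all randomized objectives happen to select the same vertex, the mean degenerates to that vertex, so in practice a handful of draws (or adding the zero-cost solution to the average) is advisable.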

Since it is not clear how large the margin of your interior point needs to be, I would expect the second approach (noise-augmented LP) to be more robust.

answered Sep 24 '22 by theV0ID