Goal: Compute the intersection of two convex polytopes.
I am using scipy.spatial.HalfspaceIntersection
to do this. The following image shows the resultant intersection:
My problem: Determine an initial feasible point.
You see, the current Python
implementation of scipy.spatial.HalfspaceIntersection
requires an interior_point
to be passed as an argument.
interior_point : ndarray of floats, shape (ndim,)
Point clearly inside the region defined by halfspaces. Also called a feasible point, it can be obtained by linear programming.
Now, at the moment, I am manually supplying the feasible point, because I was just drafting a prototype to experiment with HalfspaceIntersection.
But now I have reached a point where I do not want to manually specify it.
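For context, a minimal sketch of what my prototype looks like (the halfspaces here are just an illustrative unit square, and the interior point is hand-picked):

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

# Halfspaces in stacked form [A | b] with A x + b <= 0,
# one row per inequality: here the unit square 0 <= x, y <= 1.
halfspaces = np.array([
    [-1.0,  0.0,  0.0],   # -x <= 0
    [ 0.0, -1.0,  0.0],   # -y <= 0
    [ 1.0,  0.0, -1.0],   #  x <= 1
    [ 0.0,  1.0, -1.0],   #  y <= 1
])

# Manually supplied interior point -- this is what I want to automate.
interior_point = np.array([0.5, 0.5])

hs = HalfspaceIntersection(halfspaces, interior_point)
print(hs.intersections)  # vertices of the intersection polytope
```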
SciPy's optimization module scipy.optimize.linprog
implements two general Linear Programming (LP) solvers: simplex, and interior-point. However, they seem to require a cost function. [1]
Since I want to spend as little processing time as possible computing this feasible point, I was wondering how to run either of these LP methods without a cost function, i.e., only until the solution has reached a feasible status.
Questions:
Is scipy.optimize.linprog
the right way to go for computing this feasible interior point?
If yes, how can I use either simplex or interior-point without a cost function?
Why does scipy.spatial.HalfspaceIntersection
require an interior point
to be passed as an argument in the first place? To the best of my understanding, an intersection of halfspaces is the removal of the redundant inequalities of a given set of inequalities. Why is a feasible point required for this?
In the theory of linear programming, a basic feasible solution (BFS) is a solution with a minimal set of non-zero variables. Geometrically, each BFS corresponds to a corner of the polyhedron of feasible solutions. If there exists an optimal solution, then there exists an optimal BFS.
You can specify a constant cost function, for example 0.
Here is an example:
import numpy as np
from scipy.optimize import linprog

A = np.array([[1] + 9 * [0], 9 * [0] + [1]])
b = np.array([1, 1])
Measuring the performance of this approach reveals that it is very efficient:
%time res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b)
Output:
CPU times: user 5 µs, sys: 1 µs, total: 6 µs
Wall time: 11 µs
Moreover, according to res.nit, we were done after only 2 iterations. The result res.x is correct:
array([ 1., 0., 0., 0., 0., 0., 0., 0., 0., 1.])
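For completeness, the feasibility of the returned point can be checked directly (re-stating the problem data so the snippet is self-contained):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1] + 9 * [0], 9 * [0] + [1]])
b = np.array([1, 1])
res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b)

# The point must satisfy A x = b and the default bounds x >= 0.
assert res.success
assert np.allclose(A @ res.x, b)
assert (res.x >= 0).all()
print(res.x)
```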
Note that the simplex algorithm is designed to find the vertices of the polyhedron defined by the linear constraints. To my understanding, interior-point methods do not have such a preference, although I'm not familiar with the implementation behind SciPy's linprog. Hence, since your requirement is that the point is "clearly inside the region defined by halfspaces", I suggest either of the following:
- Pass method='interior-point' to linprog.
- Add some random noise to the cost function (np.random.randn), fixing the seed (np.random.seed) for reproducibility.

Since it is not clear how large the margin of your interior point needs to be, I would expect that the second approach (noise-augmented LP) is more robust.
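A sketch of the random-cost route, under two stated assumptions: the unit-square halfspaces are purely illustrative, and averaging the optima of a random cost c and its negation -c is my own heuristic (not part of the answer above) to move the vertex solutions strictly inside the polytope:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial import HalfspaceIntersection

# Halfspaces as [A | b] with A x + b <= 0 (the unit square again);
# linprog expects A_ub x <= b_ub, hence b_ub = -b.
halfspaces = np.array([
    [-1.0,  0.0,  0.0],
    [ 0.0, -1.0,  0.0],
    [ 1.0,  0.0, -1.0],
    [ 0.0,  1.0, -1.0],
])
A_ub, b_ub = halfspaces[:, :-1], -halfspaces[:, -1]

np.random.seed(0)                    # reproducible random cost
c = np.random.randn(A_ub.shape[1])

# Each LP solve lands on a vertex; averaging the optima of c and -c
# is a heuristic (my addition) to obtain a strictly interior point.
x_lo = linprog(c,  A_ub=A_ub, b_ub=b_ub, bounds=(None, None)).x
x_hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None)).x
interior_point = (x_lo + x_hi) / 2

hs = HalfspaceIntersection(halfspaces, interior_point)
print(interior_point, hs.intersections)
```

For a full-dimensional bounded polytope and a generic random cost, the two optima are distinct vertices, so their midpoint sits inside the region rather than on its boundary.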