I've read that integer programming is either very tricky or not possible with SciPy, and that I probably need to use something like zibopt to do it in Python. But I really thought I could do it by creating one "is binary" constraint for each element in a vector being optimized by SciPy.
To do that, I utilized the closure trick from http://docs.python-guide.org/en/latest/writing/gotchas/#late-binding-closures and created one constraint function for each element, like so:
import scipy

def get_binary_constraints(vector, indices_to_make_binary=None):
    indices_to_make_binary = indices_to_make_binary or range(len(vector))
    for i in indices_to_make_binary:
        def ith_element_is_binary(vector, index=i):
            return vector[index] == 0 or vector[index] == 1
        yield ith_element_is_binary

test_vector = scipy.array([0.5, 1, 3])
constraints = list(get_binary_constraints(test_vector))
for constraint in constraints:
    print constraint(test_vector)
which prints:
False
True
False
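As an aside, the index=i default argument is what makes this work: Python closures bind names late, so without it every generated function would look up the loop variable after the generator has finished and check the last element. Here is a minimal sketch of my own (get_constraints_buggy is a made-up name) showing the failure mode:

import scipy

def get_constraints_buggy(vector):
    for i in range(len(vector)):
        def ith_element_is_binary(vector):
            # No index=i default: i is looked up at call time, not definition time.
            return vector[i] == 0 or vector[i] == 1
        yield ith_element_is_binary

test_vector = scipy.array([0.5, 1, 3])
constraints = list(get_constraints_buggy(test_vector))  # exhausting the generator leaves i == 2
for constraint in constraints:
    print constraint(test_vector)   # False, False, False -- every constraint checks index 2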
Then I modified get_binary_constraints for fmin_cobyla, whose constraints are given as a "sequence of functions that all must be >= 0":
def get_binary_constraints(vector, indices_to_make_binary=None):
    indices_to_make_binary = indices_to_make_binary or range(len(vector))
    for i in indices_to_make_binary:
        def ith_element_is_binary(vector, index=i):
            # 0 when the element is exactly 0 or 1, -1 (i.e. < 0) otherwise
            return int(vector[index] == 0 or vector[index] == 1) - 1
        yield ith_element_is_binary
which prints the following for the same test vector [0.5, 1, 3]:
-1
0
-1
So only the constraint for the 2nd element meets the >= 0 condition; the other two are reported as violated.
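For reference, here is a minimal standalone sketch (the toy problem is my own, not from the question) of how fmin_cobyla uses that >= 0 convention with an ordinary smooth constraint:

from scipy import optimize

def toy_objective(x):
    return x[0] ** 2

def toy_constraint(x):
    # Feasible region is x[0] >= 1; the function is >= 0 exactly there.
    return x[0] - 1.0

result = optimize.fmin_cobyla(toy_objective, [3.0], [toy_constraint], rhoend=1e-7)
print result   # approximately [ 1.]

Each constraint function returns a positive value when satisfied with room to spare, zero on the boundary, and a negative value when violated, which is why the "is binary" version above returns 0 or -1.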
Then, I set up a very simple optimization problem as follows:
from scipy import optimize
import scipy
def get_binary_constraints(vector, indices_to_make_binary=None):
    indices_to_make_binary = indices_to_make_binary or range(len(vector))
    for i in indices_to_make_binary:
        def ith_element_is_binary(vector, index=i):
            return int(vector[index] == 0 or vector[index] == 1) - 1
        yield ith_element_is_binary

def objective_function(vector):
    return scipy.sum(vector)

def main():
    guess_vector = scipy.zeros(3)
    constraints = list(get_binary_constraints(guess_vector))
    result = optimize.fmin_cobyla(objective_function, guess_vector, constraints)
    print result

if __name__ == '__main__':
    main()
And this is what I get:
Return from subroutine COBYLA because the MAXFUN limit has been reached.
NFVALS = 1000 F =-8.614066E+02 MAXCV = 1.000000E+00
X =-2.863657E+02 -2.875204E+02 -2.875204E+02
[-286.36573349 -287.52043407 -287.52043407]
Before I go use R's LPSolve package or install zibopt for this, I'd really like to see if I can just use SciPy.
Am I doing something wrong, or is this just not possible to do in SciPy?
The problem is that, counterintuitive as it may seem, integer programming is a fundamentally harder problem than linear programming over the reals. Someone in the SO thread you linked to mentions that SciPy uses the simplex algorithm. Simplex only handles continuous linear programs; it cannot enforce integrality constraints, so you have to use a different kind of algorithm, such as branch and bound or cutting planes.
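To make "a different kind of algorithm" concrete, here is a toy sketch of my own (not anything SciPy provides): with binary variables the feasible set is a collection of 2**n corner points, so a solver has to search them (branch and bound just prunes that search), rather than sliding along a continuous feasible region the way simplex does:

import itertools

def brute_force_binary_min(objective, n, feasible=lambda v: True):
    # Enumerate every vector in {0, 1}**n and keep the best feasible one.
    best_vector, best_value = None, float('inf')
    for candidate in itertools.product((0, 1), repeat=n):
        if feasible(candidate):
            value = objective(candidate)
            if value < best_value:
                best_vector, best_value = candidate, value
    return best_vector, best_value

# Same objective as in the question, plus a toy "at least one 1" constraint.
print brute_force_binary_min(sum, 3, feasible=lambda v: sum(v) >= 1)
# -> ((0, 0, 1), 1)

This is only workable for small n, which is exactly why real integer-programming solvers exist.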
If you do find a way to make simplex solve integer programs efficiently, you've solved the P = NP problem, which is worth US$1,000,000 to the first person to solve it.