
Gurobi model modification slow, can I modify the constraint matrix directly?

I want to make changes to the coefficients in an existing model. Currently (with the Python API) I'm looping through the constraints and calling model.chgCoeff but it's quite slow. Is there a faster way, perhaps accessing the constraint matrix directly, in the Python and/or C API?

Example code below. The reason it's slow seems to be mostly because of the loop itself; replacing chgCoeff with any other operation is still slow. Normally I would get around this by using vector operations rather than for loops, but without access to the constraint matrix I don't think I can do that.

from __future__ import division
import gurobipy as gp
import numpy as np
import time
N = 300
M = 2000
m = gp.Model()
m.setParam('OutputFlag', False)

masks = [np.random.rand(N) for i in range(M)]
p = 1/np.random.rand(N)
rets = [p * masks[i] - 1 for i in range(M)]
v = np.random.rand(N)*10000 * np.round(np.random.rand(N))

# Variables: t (maximized below) and N semi-continuous variables x
t = m.addVar()
x = [m.addVar(vtype=gp.GRB.SEMICONT, lb=1000, ub=v[i]) for i in range(N)]
m.update()
# One constraint per row of coefficients: t <= rets[i] . x
cons = [m.addConstr(t <= gp.LinExpr(ret, x)) for ret in rets]

m.setObjective(t, gp.GRB.MAXIMIZE)
m.update()

start_time = time.time()
m.optimize()
solve_ms = int(((time.time() - start_time)*1000))

print('First solve took %s ms' % solve_ms)

# Generate new coefficients, then update every constraint one coefficient at a
# time (the constraint is stored as t - rets[i] . x <= 0, hence the sign flip)
p = 1/np.random.rand(N)
rets = [p * masks[i] - 1 for i in range(M)]
start_time = time.time()
for i in range(M):
    for j in range(N):
        if rets[i][j] != -1:
            m.chgCoeff(cons[i], x[j], -rets[i][j])
m.update()
update_ms = int(((time.time() - start_time)*1000))
print('Model update took %s ms' % update_ms)

start_time = time.time()
m.optimize()
solve_ms = int(((time.time() - start_time)*1000))
print('Second solve took %s ms' % solve_ms)

k = 2
start_time = time.time()
for i in range(M):
    for j in range(N):
        if rets[i][j] != -1:
            k *= rets[i][j]
solve_ms = int(((time.time() - start_time)*1000))
print('Plain loop took %s ms' % solve_ms)

R = np.array(rets)
start_time = time.time()
S = np.copy(R)
copy_ms = int(((time.time() - start_time)*1000))
print('np.copy() took %s ms' % copy_ms)

Output:

First solve took 1767 ms
Model update took 2051 ms
Second solve took 1872 ms
Plain loop took 1103 ms
np.copy() took 3 ms

A call to np.copy on a (2000, 300) array, the same size as the constraint matrix, takes 3 ms. Is there a fundamental reason I'm missing why the whole model update can't be that fast?

asked Sep 26 '22 by akxlr

1 Answer

You can't access the constraint matrix directly through Gurobi's Python interface. Even if you could, you couldn't do an np.copy operation, because the matrix is stored in CSR (compressed sparse row) format, not as a dense array. To make the sort of wholesale changes you describe, it is better to remove each constraint and add a new one rather than modify coefficients in place. In your case the changes to each constraint are so significant that you wouldn't get much benefit from a warm start anyway, so you won't lose anything by not keeping the same constraint objects.
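
For intuition, here is a small illustration of why a CSR matrix can't simply be overwritten the way np.copy overwrites a dense array. It uses scipy.sparse purely as an assumed stand-in for CSR storage; it is not how you interact with Gurobi's model.

import numpy as np
from scipy.sparse import csr_matrix

# CSR keeps only the nonzeros, plus bookkeeping arrays describing where they sit.
A = csr_matrix(np.array([[0.0, 2.0, 0.0],
                         [1.0, 0.0, 3.0]]))
print(A.data)     # nonzero values:               [2. 1. 3.]
print(A.indices)  # column index of each nonzero: [1 0 2]
print(A.indptr)   # offsets where each row starts: [0 1 3]

# Overwriting the existing nonzero values in bulk is cheap...
A.data[:] = [9.0, 9.0, 9.0]
# ...but putting a coefficient where there was a structural zero changes the
# sparsity pattern, so all three arrays have to be rebuilt.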

Assuming you adjust the rets array in the code above for the -1 special case, the following code will do what you want and be much faster.

# Remove the old constraints and rebuild them from the new coefficients
for con in cons:
    m.remove(con)
new_cons = [m.addConstr(t <= gp.LinExpr(ret, x)) for ret in rets]
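
For reference, a sketch of how this could be dropped into the timing section of the question's script. The -1 handling from the original loop is omitted here, so adjust rets first if your masks can contain zeros.

start_time = time.time()
# Replace every constraint instead of editing coefficients one by one
for con in cons:
    m.remove(con)
cons = [m.addConstr(t <= gp.LinExpr(ret, x)) for ret in rets]
m.update()
update_ms = int((time.time() - start_time) * 1000)
print('Model update (remove/re-add) took %s ms' % update_ms)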

answered Sep 30 '22 by David Nehme