I am running a constrained optimization problem with about 1500 variables, and it is taking over 30 minutes to run.
If I loosen the tolerance to 1, the minimization completes in about five minutes, but that doesn't seem like a good way to speed things up.
from scipy.optimize import minimize
results = minimize(objFun, initialVals, method='SLSQP', bounds = bnds, constraints=cons, tol = toler)
print(results)
fun: -868.72033130318198
jac: array([ 0., 0., 0., ..., 0., 0., 0.])
message: 'Optimization terminated successfully.'
nfev: 1459
nit: 1
njev: 1
status: 0
success: True
x: array([ 0., 0., 0., ..., 1., 1., 1.])
Any suggestions would be appreciated.
optimize.minimize can be terminated early by using tol and maxiter (maxfev as well for some optimization methods). There are also method-specific stopping options such as xtol, ftol, and gtol, as described in the SciPy documentation.
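For SLSQP specifically, these limits are passed through the options dict. A minimal sketch using the names from the question (the maxiter and ftol values are purely illustrative):

from scipy.optimize import minimize

# Cap the iteration count and set SLSQP's convergence tolerance explicitly.
results = minimize(objFun, initialVals, method='SLSQP', bounds=bnds,
                   constraints=cons,
                   options={'maxiter': 50, 'ftol': 1e-6, 'disp': True})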
The answer is yes.
NumPy/SciPy functions are usually optimized for multithreading. Did you look at your CPU utilization to confirm that only one core is being used while the simulation is running? Otherwise you have nothing to gain from running multiple instances.
Your tolerance should be set to whatever tolerance you actually need. Raising it just tells the optimiser to stop sooner; it doesn't make the iterations themselves any faster. That being said, converging to a tighter tolerance than you need is a waste of your time.
Possible ways to reduce the time required are as follows:
As you are using finite differences, you need (1 + the number of design variables) evaluations of your objective function to get the total sensitivity; with 1500 design variables that is roughly 1500 objective calls per gradient.
As ev-br said, if you can find the analytical form of the Jacobian then this isn't needed. Given that you have 1500 design variables, I'm guessing that isn't easy, though if your objective function allows it, automatic differentiation might be an option. I've had some experience with AlgoPy, which you could look at.
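If an analytic gradient is available, it can be supplied through minimize's jac argument so SLSQP skips the finite-difference step entirely. A minimal sketch: objGrad is a hypothetical placeholder you would replace with the real derivative of objFun; the other names come from the question.

import numpy as np
from scipy.optimize import minimize

def objGrad(x):
    # Placeholder analytic gradient (e.g. for an objective of sum(x**2)).
    # Replace with the true derivative of objFun.
    return 2.0 * np.asarray(x)

results = minimize(objFun, initialVals, method='SLSQP', jac=objGrad,
                   bounds=bnds, constraints=cons, tol=toler)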
Reducing the cost of each objective function evaluation may be the easiest approach, given the high number of evaluations. Once again, see ev-br's answer for things like compiling with Cython and generally reducing complexity. You could time parts of the code with timeit to see whether changes are beneficial, for example as sketched below.
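A quick way to measure the per-call cost, assuming objFun and initialVals from the question (the repeat count of 100 is arbitrary):

import timeit

# Average the cost of a single objective evaluation at the starting point.
t = timeit.timeit(lambda: objFun(initialVals), number=100)
print('average objective evaluation: {:.6f} s'.format(t / 100))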
Reducing the number of design variables linearly lowers the objective function calls needed for the finite difference. Do all your variables change significantly? Could some be fixed at a set value? Can you derive some as a function of others?
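As a rough sketch of that last idea, you can wrap the objective so the optimiser only sees the variables that are allowed to move. All names here (k, fixedTail, reducedObj) are hypothetical, and the bounds and constraints would need the same reduction:

import numpy as np

k = 200                                    # hypothetical number of frozen variables
fixedTail = np.asarray(initialVals)[-k:]   # hold the last k variables at their start values

def reducedObj(xFree):
    # Rebuild the full design vector before calling the original objective.
    return objFun(np.concatenate([xFree, fixedTail]))

# minimize(reducedObj, np.asarray(initialVals)[:-k], ...) with similarly
# reduced bounds and constraints.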
Depending on your problem, you may be able to select a better starting point so the optimiser begins 'closer' to the final solution. You may also be able to 'restart' your optimisation from a previous result.
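Restarting is just a matter of feeding the x from an earlier run (perhaps one done at a loose tolerance) back in as the new starting point. A sketch reusing the question's names, where results is the OptimizeResult from the previous run:

from scipy.optimize import minimize

# Warm start from the previous result.
results2 = minimize(objFun, results.x, method='SLSQP', bounds=bnds,
                    constraints=cons, tol=toler)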
The finite difference evaluations don't have to be done in order, so you could write your own finite difference function and run the calls in parallel using the multiprocessing module. The effectiveness of this depends on your system and the number of cores available.
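A minimal sketch of that idea, assuming objFun is a picklable top-level function; parallel_fd_grad is a made-up helper name, and on platforms that spawn processes the call should sit under an if __name__ == '__main__': guard:

import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def parallel_fd_grad(f, eps=1e-8, processes=4):
    # Return a jac-style callable that evaluates a forward-difference
    # gradient with the perturbed points farmed out to a process pool.
    def grad(x):
        x = np.asarray(x, dtype=float)
        f0 = f(x)
        points = []
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            points.append(xp)
        with Pool(processes) as pool:
            fvals = pool.map(f, points)
        return (np.asarray(fvals) - f0) / eps
    return grad

# Hypothetical usage with the question's names:
# results = minimize(objFun, initialVals, method='SLSQP', bounds=bnds,
#                    constraints=cons, jac=parallel_fd_grad(objFun), tol=toler)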
Here's what I'd do: