
Scipy.optimize minimize is taking too long

I am running a constrained optimization problem with about 1500 variables and it is taking over 30 minutes to run....

If I loosen the tolerance to 1, the minimization completes in about five minutes, but that doesn't seem like a good way to speed things up.

from scipy.optimize import minimize

results = minimize(objFun, initialVals, method='SLSQP', bounds=bnds, constraints=cons, tol=toler)

print(results)

     fun: -868.72033130318198
     jac: array([ 0.,  0.,  0., ...,  0.,  0.,  0.])
 message: 'Optimization terminated successfully.'
    nfev: 1459
     nit: 1
    njev: 1
  status: 0
 success: True
       x: array([ 0.,  0.,  0., ...,  1.,  1.,  1.])

Any suggestions would be appreciated.

RussellB. asked Jul 29 '16

People also ask

How do I stop SciPy optimizing minimize?

scipy.optimize.minimize can be terminated by using tol and maxiter (maxfev also for some optimization methods). There are also method-specific terminators like xtol, ftol, gtol, etc., as described in the SciPy documentation.
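
For example, a minimal sketch (objFun and x0 here are placeholders, not names from the question):

from scipy.optimize import minimize

# SLSQP stops when its ftol test (set here via tol) passes,
# or after at most maxiter iterations, whichever comes first.
res = minimize(objFun, x0, method='SLSQP', tol=1e-6,
               options={'maxiter': 200})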

Is SciPy minimize deterministic?

Yes: for a deterministic objective function and a fixed starting point, minimize follows the same sequence of steps and returns the same result on every run.

Is SciPy optimize multithreaded?

Many NumPy/SciPy routines are backed by multithreaded libraries. Check your CPU utilization to confirm that only one core is being used while the optimization is running; otherwise you have nothing to gain from running multiple instances.


2 Answers

Your tolerance should be set to whatever tolerance you actually need. Loosening it just tells the optimiser to stop sooner; it doesn't make the computation any faster. That said, demanding a tighter tolerance than you need is a waste of your time.

Possible ways to reduce the time required are as follows:

  • Use a different optimiser
  • Use a different gradient finding method
  • Speed up your objective function
  • Reduce the number of design variables
  • Choose a better initial guess
  • Use parallel processing

Gradient methods

As you are using finite differences, you need (1 + the number of design variables) evaluations of your objective function to get the total sensitivity; with 1500 variables, that is 1501 evaluations per gradient.

As ev-br said, if you can find an analytical expression for the jacobian then this isn't needed. Given that you have 1500 design variables, I'm guessing that isn't easy, though if your objective function allows it, automatic differentiation might be an option. I've had some experience with AlgoPy, which you could look at. A sketch of passing an analytic jacobian follows.
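
A minimal sketch with a toy sum-of-squares objective (everything here is illustrative, not the question's problem):

import numpy as np
from scipy.optimize import minimize

def f(x):
    # Toy objective with its minimum at the origin.
    return np.sum(x**2)

def grad(x):
    # Analytic gradient of f, so minimize needs no finite differencing.
    return 2 * x

res = minimize(f, np.ones(1500), method='SLSQP', jac=grad)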

Objective function speed

Due to the high number of objective function evaluations, this may be the easiest approach. Once again, see ev-br's answer for suggestions like compiling with Cython and generally reducing complexity. You can time parts of the code using timeit to see whether changes are beneficial, as sketched below.
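
For example (objFun and initialVals stand in for the question's own names):

import timeit

# Average cost of one objective evaluation; with 1500 variables,
# each finite-difference gradient costs roughly 1501 of these.
t = timeit.timeit(lambda: objFun(initialVals), number=100) / 100
print(f"{t:.6f} seconds per call")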

Design variables

Reducing the number of design variables linearly reduces the objective function calls needed for the finite difference. Do all your variables change significantly? Could some be fixed at a set value? Can some be derived as a function of others? (See the sketch below.)
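
A sketch of fixing a subset of variables (the 1000/500 split and the variable ordering are hypothetical):

import numpy as np
from scipy.optimize import minimize

fixed_vals = np.zeros(500)  # hypothetical: the last 500 variables held constant

def reduced_obj(free_vars):
    # Rebuild the full 1500-element vector before calling the original objective.
    return objFun(np.concatenate([free_vars, fixed_vals]))

res = minimize(reduced_obj, initialVals[:1000], method='SLSQP')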

Initial Guess

Depending on your problem, you may be able to select a better starting point, meaning your optimiser starts 'closer' to the final solution. You may also be able to 'restart' your optimisation from a previous result, as sketched below.
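
Restarting is just a matter of feeding a previous result's x back in as the new starting point (the tolerance values here are illustrative):

from scipy.optimize import minimize

# First pass with a loose tolerance, then refine from where it stopped.
rough = minimize(objFun, initialVals, method='SLSQP',
                 bounds=bnds, constraints=cons, tol=1e-2)
refined = minimize(objFun, rough.x, method='SLSQP',
                   bounds=bnds, constraints=cons, tol=1e-6)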

Parallelisation

The finite difference evaluations don't have to be done in order, so you could write your own finite difference function and run the calls in parallel using the multiprocessing module. The effectiveness of this depends on your system and the number of cores available. A minimal sketch follows.
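
A sketch of a parallel forward-difference gradient (the objective must be picklable, i.e. defined at module level; eps and the pool size are illustrative):

import numpy as np
from multiprocessing import Pool

def parallel_grad(f, x, eps=1e-8, processes=4):
    # The n + 1 evaluations are independent: f(x) plus one perturbed point per variable.
    identity = np.eye(len(x))
    points = [x] + [x + eps * identity[i] for i in range(len(x))]
    with Pool(processes) as pool:
        vals = pool.map(f, points)
    return (np.array(vals[1:]) - vals[0]) / eps

# Then hand it to minimize as the jacobian:
# res = minimize(objFun, initialVals, method='SLSQP',
#                jac=lambda x: parallel_grad(objFun, x))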

Wokpak answered Oct 12 '22

Here's what I'd do:

  • profile the minimization. From your output it seems that evaluating the function is the bottleneck; check whether that's so (see the profiling sketch after this list). If it is, then:
  • see if you can compute the jacobian with paper and pencil or a computer algebra system, and use it instead of finite differences.
  • see if you can speed up the function itself (mathematical simplifications, numpy vectorization, Cython).
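
A quick profiling sketch (the call mirrors the one in the question, so objFun, initialVals, bnds and cons are the question's own names, assumed to be defined at module level):

import cProfile
import pstats

# Profile the full minimize call and dump the stats to a file.
cProfile.run(
    "minimize(objFun, initialVals, method='SLSQP', bounds=bnds, constraints=cons)",
    "minimize.prof",
)
# Show the ten most expensive calls; if objFun dominates, it is the bottleneck.
pstats.Stats("minimize.prof").sort_stats("cumulative").print_stats(10)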
ev-br answered Oct 12 '22