Resuming an optimization in scipy.optimize?

scipy.optimize presents many different methods for local and global optimization of multivariate systems. However, my optimization runs are very long and may be interrupted (and in some cases I may want to interrupt them deliberately). Is there any way to restart... well, any of them? I mean, clearly one can provide the last, most optimized set of parameters found as the initial guess, but that's not the only state in play - for example, there are also gradients (Jacobians), populations in differential evolution, etc. I obviously don't want these to have to start over as well.

I see little way to provide these to scipy, nor to save its state. For functions that take a Jacobian, for example, there is a Jacobian argument ("jac"), but it's either a boolean (indicating that your evaluation function returns a Jacobian, which mine doesn't) or a callable (I would only have the single result from the last run to provide). Nothing takes just an array of the last Jacobian evaluated. And with differential evolution, losing the population would be horrible for performance and convergence.

Are there any solutions to this? Any way to resume optimizations at all?

asked Oct 12 '16 by KarenRei

People also ask

How do I stop SciPy optimizing minimize?

scipy.optimize.minimize can be terminated by using tol and maxiter (maxfev also for some optimization methods). There are also method-specific terminators like xtol, ftol, gtol, etc., as documented in the scipy reference.
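For example, a minimal sketch of cutting a run short with tol and options['maxiter'] (the objective and the numbers here are arbitrary, just to illustrate the knobs):

```python
import numpy as np
from scipy.optimize import minimize, rosen

# A loose tolerance plus a hard cap on iterations terminates the run early.
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='L-BFGS-B', tol=1e-3,
               options={'maxiter': 50})
print(res.x, res.nit, res.message)
```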

How does SciPy optimize work?

SciPy optimize provides functions for minimizing (or maximizing) objective functions, possibly subject to constraints. It includes solvers for nonlinear problems (with support for both local and global optimization algorithms), linear programming, constrained and nonlinear least-squares, root finding, and curve fitting.

What is JAC in SciPy optimize?

jac : bool or callable, optional Jacobian (gradient) of objective function. Only for CG, BFGS, Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg. If jac is a Boolean and is True, fun is assumed to return the gradient along with the objective function. If False, the gradient will be estimated numerically.
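A small sketch of the two forms (f and grad_f are just illustrative toy functions):

```python
import numpy as np
from scipy.optimize import minimize

# Simple quadratic whose gradient we know analytically.
def f(x):
    return np.sum((x - 3.0) ** 2)

def grad_f(x):
    return 2.0 * (x - 3.0)

# jac as a callable: minimize calls grad_f(x) to get the gradient.
res_callable = minimize(f, np.zeros(4), method='BFGS', jac=grad_f)

# jac=True: the objective itself returns (value, gradient).
def f_with_grad(x):
    return np.sum((x - 3.0) ** 2), 2.0 * (x - 3.0)

res_bool = minimize(f_with_grad, np.zeros(4), method='BFGS', jac=True)

print(res_callable.x, res_bool.x)
```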


1 Answer

The general answer is no: there's no general solution apart from, just as you say, restarting from the last estimate of the previous run.
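To make the restart-from-the-last-estimate approach concrete, one common pattern is persisting the current iterate from a callback and feeding it back in as x0. A minimal sketch, assuming a picklable objective (rosen and the checkpoint file name are placeholders; note that only the point is saved, not internal state such as the Hessian approximation):

```python
import pickle
import numpy as np
from scipy.optimize import minimize, rosen

CKPT = 'minimize_x.pkl'  # hypothetical checkpoint file

def save_x(xk):
    # Called after each iteration with the current parameter vector.
    with open(CKPT, 'wb') as f:
        pickle.dump(xk, f)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
try:
    with open(CKPT, 'rb') as f:
        x0 = pickle.load(f)  # resume from the last saved point, if any
except FileNotFoundError:
    pass

res = minimize(rosen, x0, method='BFGS', callback=save_x)
print(res.x)
```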

For differential evolution specifically, though, you can instantiate the DifferentialEvolutionSolver directly, pickle it at a checkpoint, and unpickle it later to resume. (The suggestion comes from https://github.com/scipy/scipy/issues/6517)
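A minimal sketch of that checkpoint/resume pattern, assuming the objective itself is picklable (the file name is a placeholder, and since DifferentialEvolutionSolver lives in a private module, the import path and pickling behaviour may vary between scipy versions):

```python
import pickle
from scipy.optimize import rosen
from scipy.optimize._differentialevolution import DifferentialEvolutionSolver

bounds = [(-5, 5)] * 3
solver = DifferentialEvolutionSolver(rosen, bounds, seed=1)

# Advance one generation at a time, overwriting a checkpoint as we go.
for generation in range(100):
    best_x, best_energy = next(solver)
    with open('de_checkpoint.pkl', 'wb') as f:
        pickle.dump(solver, f)

# Later, possibly in a fresh process: restore the population and keep evolving.
with open('de_checkpoint.pkl', 'rb') as f:
    solver = pickle.load(f)
for generation in range(100):
    best_x, best_energy = next(solver)

print(best_x, best_energy)
```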

answered Oct 13 '22 by ev-br