Is there a way to make the scipy optimization modules use a smaller step size? I am optimizing a problem with a large set of variables (approximately 40) that I believe are near the optimal value. However, when I run the scipy minimization routines (so far I have tried L-BFGS and CG), they do not converge because the initial step size is too large.
I have a similar issue, and I use SLSQP because I need bounds. With this solver, the option 'eps' (the step size used for the numerical approximation of the Jacobian) helps it converge by using a better-suited step:
minimize(simulation, start_values, method='SLSQP', bounds=bnds,
         constraints=({'type': 'ineq', 'fun': lambda x: const(x)}),
         options={'eps': 1})
This is not the ideal solution.
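To make the 'eps' effect concrete, here is a minimal runnable sketch with a stand-in quadratic objective (the `objective` function, its bounds, and the starting point are all assumptions for illustration, not the original `simulation`/`const` problem):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective with a known minimum at x = 1 (illustrative only).
def objective(x):
    return np.sum((x - 1.0) ** 2)

x0 = np.full(5, 1.1)          # start near the optimum, as in the question
bnds = [(-10.0, 10.0)] * 5    # hypothetical box bounds

# 'eps' sets the finite-difference step used to estimate the gradient;
# a smaller value gives more accurate gradients near the optimum.
res = minimize(objective, x0, method='SLSQP', bounds=bnds,
               options={'eps': 1e-4})
print(res.x)
```

Note that 'eps' tunes the gradient approximation rather than the line-search step itself, which is why it is a workaround rather than direct step-size control.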
I believe COBYLA is the only method in scipy.optimize.minimize that supports this directly. You can essentially control how big its steps are with the rhobeg option. (It's not really the step size, since COBYLA is a sequential linear approximation method, but it has the same effect.)
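A small sketch of the rhobeg approach, again on an assumed quadratic objective (the function and starting point are illustrative, not from the question):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objective with a known minimum at x = 1 (illustrative only).
def objective(x):
    return np.sum((x - 1.0) ** 2)

x0 = np.full(5, 1.05)  # already close to the optimum

# 'rhobeg' is COBYLA's initial trust-region radius: keeping it small
# prevents the first iterations from jumping far from a good start point.
res = minimize(objective, x0, method='COBYLA',
               options={'rhobeg': 0.01, 'maxiter': 2000})
print(res.x)
```

COBYLA shrinks the trust region from rhobeg down to rhoend (1e-4 by default), so a small rhobeg keeps every trial point in the neighborhood of the initial guess.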