Has anybody ever encountered problems with fmin_slsqp (or anything else in scipy.optimize) only when using very large or very small numbers?
I am working on some Python code to take a grayscale image and a mask, generate a histogram, then fit multiple Gaussians to the histogram. To develop the code I used a small sample image, and after some work the code was working brilliantly.

However, when I normalize the histogram first, generating bin values <<1, or when I histogram huge images, generating bin values in the hundreds of thousands, fmin_slsqp() starts failing sporadically. It quits after only ~5 iterations, usually just returning a slightly modified version of the initial guess I gave it, with exit mode 8, which means "Positive directional derivative for linesearch."

If I check the size of the bin counts at the beginning and scale them into the neighborhood of ~100-1000, fmin_slsqp() works as usual. I just un-scale things before returning the results. I guess I could leave it like that, but it feels like a hack.
I have looked around and found folks talking about the epsilon value, which is basically the dx used for approximating derivatives, but tweaking that has not helped. Other than that I haven't found anything useful yet. Any ideas would be greatly appreciated. Thanks in advance.
james
optimize.minimize can be terminated by using tol and maxiter (maxfev also for some optimization methods). There are also some method-specific terminators like xtol, ftol, gtol, etc., as mentioned in the SciPy documentation.
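A quick sketch of those termination options on minimize, using the Rosenbrock function as a stand-in objective (the specific tolerance values here are arbitrary examples):

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # Classic Rosenbrock test function; minimum at all ones.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])

# tol is the generic tolerance; method-specific terminators such as
# ftol and maxiter go in the options dict for SLSQP.
res = minimize(rosen, x0, method="SLSQP", tol=1e-9,
               options={"maxiter": 1000, "ftol": 1e-9})
```

Other methods accept different option names (e.g. xtol for Powell, gtol for BFGS), so check the options listed for the method you pass.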
SciPy optimize provides functions for minimizing (or maximizing) objective functions, possibly subject to constraints. It includes solvers for nonlinear problems (with support for both local and global optimization algorithms), linear programming, constrained and nonlinear least-squares, root finding, and curve fitting.
NumPy/SciPy's functions are usually optimized for multithreading. Did you look at your CPU utilization to confirm that only one core is being used while the simulation is being run? Otherwise you have nothing to gain from running multiple instances.
I've had similar problems with optimize.leastsq. The data I need to deal with is often very small, like 1e-18 and such, and I noticed that leastsq doesn't converge to the best-fit parameters in those cases. Only when I scale the data to something more common (like in the hundreds or thousands, something where you can maintain resolution and dynamic range with integers) can I get leastsq to converge to something very reasonable.
I've been trying around with those optional tolerance parameters so that I don't have to scale data before optimizing, but haven't had much luck with it...
Does anyone know a good general approach to avoiding this problem with the functions in the scipy.optimize package? I'd appreciate it if you could share... I think the root cause is the same as in the OP's problem.
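The rescaling approach described above might look like this with leastsq. The exponential decay model and the 1e-18-scale synthetic data are my own assumptions for illustration:

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic decay curve at the 1e-18 scale (hypothetical data).
t = np.linspace(0, 10, 50)
y = 3e-18 * np.exp(-0.4 * t)

# Rescale y to O(100) before fitting, then undo on the amplitude.
scale = 100.0 / np.abs(y).max()

def residuals(p, t, y_scaled):
    amp, rate = p
    return y_scaled - amp * np.exp(-rate * t)

p_fit, _ = leastsq(residuals, [50.0, 1.0], args=(t, y * scale))
amp = p_fit[0] / scale   # amplitude back in original units
rate = p_fit[1]          # rate is scale-free, no un-scaling needed
```

leastsq also has a diag parameter intended for exactly this kind of variable scaling, which may be worth trying before hand-rolling the rescaling.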
Are you updating your initial guess ("x0") when your underlying data changes scale dramatically? For any iterative optimization problem, these problems will occur if your initial guess is far from the data you're trying to fit. It's more of an optimization problem than a SciPy problem.
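A toy illustration of deriving x0 from the data's own scale each time it changes, rather than reusing a guess tuned for a differently-scaled dataset. The objective and the numbers here are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data whose scale may change between runs.
data = np.array([2e5, 3e5, 2.5e5])

def err(x):
    # Toy objective: squared distance to the data mean.
    return (x[0] - data.mean()) ** 2

# Re-derive the initial guess from the current data's scale
# instead of hard-coding an x0 from a previous dataset.
x0 = np.array([np.median(data)])
res = minimize(err, x0, method="SLSQP")
```

Starting within the data's own order of magnitude keeps the finite-difference gradient steps meaningful relative to the objective.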