I'm using scipy.optimize.fmin_bfgs(f, init_theta, fprime) to minimize f, which has gradient fprime. I compute f and fprime in a single function because most of the computation is the same, so there's no need to do it twice.

Is there any way to call fmin_bfgs() specifying a single function that returns both f and fprime?
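For concreteness, the function in question might look something like this hypothetical least-squares objective, where the residual vector is the expensive shared computation (f_and_grad, X, and y are illustrative names, not from the original question):

import numpy as np

def f_and_grad(theta, X, y):
    # The residual is the shared, expensive piece of work;
    # both the value and the gradient are cheap to derive from it.
    r = X @ theta - y
    value = 0.5 * (r @ r)
    grad = X.T @ r
    return value, grad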
The answer is yes, indirectly. If you're trying to save on computation time rather than just combine the calculation of f and f' for code convenience, it seems like you need an extra wrapper around your function to cache values, since fmin_bfgs doesn't seem to allow you to pass such a function (unlike some other optimization functions).
Here's one way to do that, maintaining the 10 most recently evaluated points in a little cache. (I'm not sure whether calls to this function need to be thread-safe: probably not, but if so, you'll need to add some locking; a sketch of that follows the example below.)
import collections
import functools

def func_wrapper(f, cache_size=10):
    evals = {}                         # maps serialized point -> (value, gradient)
    last_points = collections.deque()  # insertion order, for evicting old entries

    def get(pt, which):
        s = pt.tobytes()  # binary representation of the numpy array, to make it hashable
        if s not in evals:
            evals[s] = f(pt)
            last_points.append(s)
            if len(last_points) > cache_size:
                del evals[last_points.popleft()]
        return evals[s][which]

    # which=0 selects the function value, which=1 the gradient
    return functools.partial(get, which=0), functools.partial(get, which=1)
If we then do
>>> from scipy import optimize
>>> def f(x):
...     print("evaluating", x)
...     return (x-3)**2, 2*(x-3)
...
>>> f_, fprime = func_wrapper(f)
>>> optimize.fmin_bfgs(f_, 1000, fprime)
evaluating [ 994.93480441]
evaluating [ 974.67402207]
evaluating [ 893.63089268]
evaluating [ 665.93446894]
evaluating [ 126.99931561]
evaluating [ 3.]
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 4
Function evaluations: 7
Gradient evaluations: 7
array([ 3.])
we can see that we don't repeat any evaluations.
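If you do turn out to need the thread-safety mentioned above, a minimal sketch is to guard the cache with a lock (func_wrapper_locked is a hypothetical name; this assumes the standard-library threading module suffices for your use case):

import collections
import functools
import threading

def func_wrapper_locked(f, cache_size=10):
    evals = {}
    last_points = collections.deque()
    lock = threading.Lock()  # guards both evals and last_points

    def get(pt, which):
        s = pt.tobytes()
        with lock:
            if s not in evals:
                evals[s] = f(pt)
                last_points.append(s)
                if len(last_points) > cache_size:
                    del evals[last_points.popleft()]
            return evals[s][which]

    return functools.partial(get, which=0), functools.partial(get, which=1)

Note that this holds the lock while f itself runs, which is the simple but conservative choice; it also keeps two threads from evaluating the same point twice.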
Suppose you have a Python function f(x) that returns both the function value and the gradient:

In [20]: def f(x):
   ....:     return (x-3)**2, 2*(x-3)
Then just pass each output separately, using a pair of lambdas:
In [21]: optimize.fmin_bfgs(lambda x: f(x)[0], 1000, lambda x: f(x)[1])
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 4
Function evaluations: 7
Gradient evaluations: 7
Out[21]: array([ 3.])
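Note that unlike the caching wrapper above, this evaluates f twice at every point (once for the value, once for the gradient), so it only buys code convenience, not computation time.

As a final note, the newer scipy.optimize.minimize interface supports this case directly: passing jac=True tells it that the objective returns a (value, gradient) tuple, so no wrapper or lambdas are needed. A minimal sketch:

import numpy as np
from scipy import optimize

def f(x):
    # returns (value, gradient) from a single computation
    value = (x[0] - 3)**2
    grad = np.array([2.0 * (x[0] - 3)])
    return value, grad

res = optimize.minimize(f, x0=[1000.0], jac=True, method='BFGS')
print(res.x)  # should converge to approximately [3.]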