I have an optimization problem that I need to solve in Python. The general structure is:
def foo(a, b, c, d, e):
    # do something and return one value

def bar(a, b, c, d, e, f, g, h, i, j):
    # do something and return one value

def func():
    return foo(a, b, c, d, e) - bar(a, b, c, d, e, f, g, h, i, j)
I would like to use least_squares minimization and get back the values of f, g, h, i and j as a list, such that the squared difference between foo and bar is minimal. I'm not sure how to use least_squares for this.
I've tried this:
# Initial values for f, g, h, i, j
x0 = [0.5, 0.5, 0.5, 0.05, 0.5]
# Constraints
lb = [0, 0, 0, 0, -0.9]
ub = [1, 100, 1, 0.5, 0.9]
x = least_squares(func, x0, lb, ub)
How do I get x to hold the list of values of f, g, h, i and j at that minimum?
The way you currently define your problem is equivalent to maximizing bar (assuming you pass func to a minimization function). As you don't vary the parameters a to e, func is basically the difference between a constant and the tunable outcome of bar; due to the negative sign, the optimizer will push bar as high as it can, since that minimizes the entire function.
I think what you actually want to minimize is the absolute or squared difference between the two functions. I illustrate that using a simple example where I assume that the functions just return the sum of their parameters:
from scipy.optimize import minimize

def foo(a, b, c, d, e):
    # do something and return one value
    return a + b + c + d + e

def bar(a, b, c, d, e, f, g, h, i, j):
    # do something and return one value
    return a + b + c + d + e + f + g + h + i + j

def func1(x):
    # your definition: the plain difference
    return foo(x[0], x[1], x[2], x[3], x[4]) - bar(x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], x[8], x[9])

def func2(x):
    # squared difference
    return (foo(x[0], x[1], x[2], x[3], x[4]) - bar(x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], x[8], x[9]))**2
# Initial values for all ten variables
x0 = (0, 0, 0, 0, 0, 0.5, 0.5, 0.5, 0.05, 0.5)

# Bounds; for illustration, a, b, c, d and e are fixed to 0 here,
# which should of course be changed for the real problem.
# Your original bounds for f, g, h, i, j were:
# lb = [0, 0, 0, 0, -0.9]
# ub = [1, 100, 1, 0.5, 0.9]
bnds = ((0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
        (0, 1), (0, 100), (0, 1), (0, 0.5), (-0.9, 0.9))

res1 = minimize(func1, x0, method='SLSQP', bounds=bnds)
res2 = minimize(func2, x0, method='SLSQP', bounds=bnds)
Then you get:
res1.x
array([   0. ,    0. ,    0. ,    0. ,    0. ,    1. ,  100. ,    1. ,
          0.5,    0.9])
and
res1.fun
-103.4
As explained above, all the parameters go to their upper bounds in order to maximize bar, which minimizes func1 (indeed, 1 + 100 + 1 + 0.5 + 0.9 = 103.4, so the difference is 0 - 103.4 = -103.4).
For the adapted function func2, you receive:
res2.fun
5.7408853312979541e-19 # which is basically 0
res2.x
array([ 0. , 0. , 0. , 0. , 0. ,
0.15254237, 0.15254237, 0.15254237, 0.01525424, -0.47288136])
So, as expected, for this simple case one can choose the parameters in a way that the difference between the two functions becomes 0. Clearly, the result for your parameters is not unique; they could, for example, all be 0.
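If you want to see that non-uniqueness explicitly, you can evaluate func2 at another feasible point; a minimal check, reusing the definitions above:
# setting f, g, h, i, j all to 0 is also within the bounds
# and likewise yields a zero squared difference
x_alt = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
print(func2(x_alt))  # 0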
I hope that helps to make your actual functions work.
EDIT:
As you asked for least_squares, that also works fine (use the function definitions from above); there the plain difference func1 is the right residual, since least_squares squares and sums it internally:
from scipy.optimize import least_squares

lb = [0, 0, 0, 0, 0, 0, 0, 0, 0, -0.9]
ub = [0.1, 0.1, 0.1, 0.1, 0.1, 1, 100, 1, 0.5, 0.9]
res_lsq = least_squares(func1, x0, bounds=(lb, ub))
Then you receive the same result as above:
res_lsq.x
array([ 1.00000000e-10, 1.00000000e-10, 1.00000000e-10,
1.00000000e-10, 1.00000000e-10, 1.52542373e-01,
1.52542373e-01, 1.52542373e-01, 1.52542373e-02,
-4.72881356e-01])
res_lsq.fun
array([ -6.88463034e-11]) # basically 0
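For completeness, least_squares also reports half the sum of squared residuals in the cost attribute of the returned result, which tells the same story:
print(res_lsq.cost)  # 0.5 * sum(res_lsq.fun**2), again basically 0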
Since the five parameters a to e are not varied in this problem, I would fix them to their known values and not pass them to the optimization call at all.
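One way to do that, as a sketch assuming a to e are known constants (the placeholder values A = B = C = D = E = 0 below are made up and should be replaced by your real ones), is to close over the fixed values and optimize only f to j:
from scipy.optimize import least_squares

# placeholder constants for a..e; substitute your actual values
A, B, C, D, E = 0, 0, 0, 0, 0

def residual(p):
    # p contains only the five free parameters f, g, h, i, j
    f, g, h, i, j = p
    return foo(A, B, C, D, E) - bar(A, B, C, D, E, f, g, h, i, j)

# initial values and bounds for f, g, h, i, j only
x0 = [0.5, 0.5, 0.5, 0.05, 0.5]
lb = [0, 0, 0, 0, -0.9]
ub = [1, 100, 1, 0.5, 0.9]

res = least_squares(residual, x0, bounds=(lb, ub))
print(res.x)  # optimized values for f, g, h, i, j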