An optimization problem with a squared objective solves successfully with IPOPT in Python Gekko.
from gekko import GEKKO
import numpy as np
m = GEKKO()
x = m.Var(); y = m.Param(3.2)
m.Obj((x-y)**2)
m.solve()
print(x.value[0],y.value[0])
However, when I switch to an absolute value objective np.abs(x-y) (the numpy version of abs) or m.abs(x-y) (the Gekko version of abs), the IPOPT solver reports a failed solution. An absolute value approximation m.sqrt((x-y)**2) also fails.
Failed Solution
from gekko import GEKKO
import numpy as np
m = GEKKO()
x = m.Var(); y = m.Param(3.2)
m.Obj(m.abs(x-y))
m.solve()
print(x.value[0],y.value[0])
I understand that gradient-based solvers don't like functions without continuous first and second derivatives, so I suspect this is happening with abs(), where 0 is a point that does not have continuous derivatives. Is there any alternative to abs() that reliably solves an absolute value objective with gradient-based solvers in Python Gekko?
You can use m.abs2 instead. It accounts for the derivative issue at zero and should resolve the failed solution.
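A minimal sketch, reusing the original script with m.abs2 swapped in for m.abs (whether IPOPT converges still depends on the problem):
from gekko import GEKKO
m = GEKKO()
x = m.Var(); y = m.Param(3.2)
# m.abs2 is a smoothed (MPCC) absolute value, so IPOPT can still be used
m.Obj(m.abs2(x-y))
m.solve()
print(x.value[0],y.value[0])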
Here is one possible solution using Gekko's binary switch variable (m.if3):
from gekko import GEKKO
import numpy as np
m = GEKKO()
y = m.Param(3.2)
x = m.Var()
# intermediate variable for the signed difference
difference = m.Intermediate(x - y)
# switch on the sign: -difference when difference < 0, difference otherwise
f = m.if3(difference, -difference, difference)
m.Obj(f)
m.solve()
print(x.value[0],y.value[0])
Returns: 3.2 3.2
m.if3(condition, x1, x2) takes a value as the condition and returns x1 if condition < 0 or x2 if condition >= 0.
There are various functions to get around this problem in the logical functions section of the documentation, including m.abs2, m.abs3, and m.if2.
The type 2 functions use an MPCC (mathematical program with complementarity constraints) formulation and continue to use IPOPT. The type 3 functions add a binary switch variable, so the solver changes to APOPT automatically.
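A sketch of the type 3 variant on the same example problem, using m.abs3 (the binary-variable absolute value, which Gekko solves with APOPT):
from gekko import GEKKO
m = GEKKO()
x = m.Var(); y = m.Param(3.2)
# m.abs3 adds a binary switch variable; Gekko switches to the APOPT MINLP solver
m.Obj(m.abs3(x-y))
m.solve()
print(x.value[0],y.value[0])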
https://github.com/BYU-PRISM/GEKKO/blob/master/docs/model_methods.rst https://gekko.readthedocs.io/en/latest/model_methods.html#logical-functions