I'm trying to perform a constrained nonlinear optimization in Matlab with a fairly complicated objective function. This objective function, as written, throws errors for input values that don't satisfy the linear inequality constraints I've defined. I know there are a few algorithms that enforce strict adherence to bounds at every iteration, but does anyone know of any algorithms (or other mechanisms) that enforce strict adherence to linear (inequality) constraints at each iteration?
I could make my objective function return zero at any such points, but I'm worried about introducing large discontinuities.
Disclaimer: I'm not an optimization maven. A few ideas though:
To expand on DanielTheRocketMan's suggestion, you can use a log barrier function to represent the constraint. If you have a constraint g(x) <= 0 and the objective you want to minimize is f(x), then you can define a new objective:
fprim(x) = f(x) - (1/t) * log(-g(x))
where t is a parameter that controls how sharply the barrier enforces the constraint. As g(x) approaches 0 from below, -log(-g(x)) goes to infinity, penalizing the objective for getting close to violating the constraint. A larger value of t weakens the barrier and lets g(x) get closer to 0.
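For concreteness, here is a minimal MATLAB sketch of that barrier-augmented objective; the f and g anonymous functions below are placeholders standing in for your real objective and constraint:

    t = 100;                                  % larger t => weaker barrier near g(x) = 0
    f = @(x) (x(1) - 2)^2 + (x(2) + 1)^2;     % placeholder objective
    g = @(x) x(1) + x(2) - 1;                 % placeholder linear constraint, g(x) <= 0

    % Barrier-augmented objective: goes to +Inf as g(x) -> 0 from below
    fprim = @(x) f(x) - (1/t) * log(-g(x));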
If your constraints are linear, they are easy to pass directly to fmincon (via the A and b arguments). Use one of the algorithms that maintains strict feasibility.
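As a sketch, assuming the A, b, and x0 values below stand in for your actual constraint data and a feasible starting point, and reusing the fprim handle defined above, the call might look like:

    A  = [1 1];        % example linear inequality: x1 + x2 <= 1
    b  = 1;
    x0 = [0; 0];       % starting point chosen so that A*x0 <= b

    opts = optimoptions('fmincon', 'Algorithm', 'interior-point');
    x = fmincon(fprim, x0, A, b, [], [], [], [], [], opts);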
Obvious point, but if your problem isn't convex, you may have big problems with getting stuck at a local optimum rather than finding the global optimum. Always something to be aware of.