
Python: multivariate non-linear solver with constraints

Given a function f(x) that takes an input vector x and returns a vector of the same length, how can you find the roots of the function while setting constraints on x? (E.g. a range for each component of x.)

To my surprise I could not find much useful information about this. In the scipy list of Optimization and Root finding algorithms there seem to be some options for scalar functions, such as brentq. I cannot find any algorithm that supports such an option for the multivariate case, though.
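(For the scalar case, brentq does indeed handle this directly, since the bracketing interval doubles as a constraint; a minimal sketch, with cos as an arbitrary example function:)

```python
import numpy as np
from scipy.optimize import brentq

# brentq finds a root of a scalar function inside a bracketing interval:
# cos changes sign on [1, 2], with its root at pi/2.
root = brentq(np.cos, 1.0, 2.0)
```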

Of course, one could do a work-around like squaring each component of the returned vector and then using one of the minimizers such as differential_evolution (I think this is actually the only one). I cannot imagine that this is a good strategy, though, since it kills the quadratic convergence of Newton's algorithm. I also find it really surprising that there does not seem to be an option for this, since it must be a very common problem. Have I missed something?
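(For reference, the squaring workaround described above can be sketched like this; the 2-D cosine target, bounds, and seed are arbitrary stand-ins:)

```python
import numpy as np
from scipy.optimize import differential_evolution

def f(x):
    # example vector-valued function whose roots we want
    return np.cos(x)

def squared_norm(x):
    # squaring and summing the components turns root finding into
    # minimization: the minimum value 0 is attained exactly at a root of f
    return np.sum(f(x) ** 2)

# one (lower, upper) pair per component of x
bounds = [(-2., -1.), (1., 2.)]
result = differential_evolution(squared_norm, bounds, seed=0)
root = result.x  # close to [-pi/2, pi/2]
```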

asked May 16 '17 by Wolpertinger

3 Answers

One (not particularly nice, but hopefully working) option to work around this problem would be to give the solver a function that only has roots in the constrained region, and that is continued outside it in a way that pushes the solver back into the proper region (a little bit like here, but in multiple dimensions).

What one might do to achieve this (at least for rectangular constraints) is to implement a constrainedFunction that is linearly continued starting from the border value of your function:

import numpy as np

def constrainedFunction(x, f, lower, upper, minIncr=0.001):
    x = np.asarray(x)
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    # Clip x to the nearest point on the border of the constrained region.
    xBorder = np.where(x < lower, lower, x)
    xBorder = np.where(x > upper, upper, xBorder)
    fBorder = f(xBorder)
    # Total distance of x from the constrained region.
    distFromBorder = (np.sum(np.where(x < lower, lower-x, 0.))
                      + np.sum(np.where(x > upper, x-upper, 0.)))
    # Continue the function linearly outside the region, nudged by minIncr
    # so that no spurious roots appear outside it.
    return (fBorder + (fBorder
                       + np.where(fBorder > 0, minIncr, -minIncr))*distFromBorder)

You can pass this function an x value, the function f that you want to continue, and two arrays lower and upper of the same shape as x, giving the lower and upper bounds in all dimensions. Now you can pass this function, rather than your original function, to the solver to find the roots.

The steepness of the continuation is simply taken to be the border value at the moment, to prevent steep jumps for sign changes at the border. To prevent roots outside the constrained region, a small value is added to/subtracted from positive/negative boundary values. I agree that this is not a very nice way to handle this, but it seems to work.

Here are two examples. For both, the initial guess lies outside the constrained region, but a correct root inside the constrained region is found.

Finding the roots of a multidimensional cosine constrained to [-2, -1]x[1, 2]:

from scipy import optimize as opt

opt.root(constrainedFunction, x0=np.zeros(2),
         args=(np.cos, np.asarray([-2., 1.]), np.asarray([-1, 2.])))

gives:

    fjac: array([[ -9.99999975e-01,   2.22992740e-04],
       [  2.22992740e-04,   9.99999975e-01]])
     fun: array([  6.12323400e-17,   6.12323400e-17])
 message: 'The solution converged.'
    nfev: 11
     qtf: array([ -2.50050470e-10,  -1.98160617e-11])
       r: array([-1.00281376,  0.03518108, -0.9971942 ])
  status: 1
 success: True
       x: array([-1.57079633,  1.57079633])

This also works for functions that are not diagonal:

def f(x):
    return np.asarray([0., np.cos(x.sum())])

opt.root(constrainedFunction, x0=np.zeros(2),
         args=(f, np.asarray([-2., 2.]), np.asarray([-1, 4.])))

gives:

    fjac: array([[ 0.00254922,  0.99999675],
       [-0.99999675,  0.00254922]])
     fun: array([  0.00000000e+00,   6.12323400e-17])
 message: 'The solution converged.'
    nfev: 11
     qtf: array([  1.63189544e-11,   4.16007911e-14])
       r: array([-0.75738638, -0.99212138, -0.00246647])
  status: 1
 success: True
       x: array([-1.65863336,  3.22942968])
answered Sep 20 '22 by jotasi

If you want to handle an optimization with constraints, you can use the facile library, which is a lot easier than scipy.optimize.

Here is the link to the package :

https://pypi.python.org/pypi/facile/1.2

Here's how to use the facile library for your example. You will need to refine what I write here, which is only a general outline. If any errors are raised, tell me which ones.

import facile

# Your vector x ('name', min, max and Size are placeholders to fill in)
x = [facile.variable('name', min, max) for i in range(Size)]

# An example of your vector being ordered, with each component in a range.
# (You could just as well give the range when declaring the variables.)
for i in range(len(x) - 1):
    facile.constraint(x[i] < x[i + 1])
    # Supposing you have a 'range' array storing the range of each variable:
    facile.constraint(range[i, 0] < x[i] < range[i, 1])

def function(x):
    # Define here the function you want to find roots of
    ...

# Add as a constraint that you want the vector to be a root of the function
facile.constraint(function(x) == 0)

# Use the facile solver
if facile.solve(x):
    print([x[i].value() for i in range(len(x))])
else:
    print("Impossible to find roots")
answered Sep 18 '22 by Quentin Brzustowski


At the risk of suggesting something you might've already crossed off, I believe this should be feasible with just scipy.optimize.minimize. The catch is that the function must have only one argument, but that argument can be a vector/list.

So f(x, y) becomes just f(z) where z = [x, y].

A good example that you might find useful, if you haven't come across it, is here.

If you want to impose bounds, as you mentioned, for a 2x1 vector, you could use:

# Specify a (lower, upper) tuple for each component of the vector
bnds = [(0., 1.) for i in range(len(x))]

And use this as the bounds parameter within minimize.
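Putting it together for the squared-residual objective from the question (a sketch; the cosine target, the bounds, and the starting point are arbitrary placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def f(z):
    # example vector-valued function; z plays the role of [x, y]
    return np.cos(z)

def objective(z):
    # zero exactly at a root of f, positive elsewhere
    return np.sum(f(z) ** 2)

# one (lower, upper) tuple per component
bnds = [(-2., -1.), (1., 2.)]
res = minimize(objective, x0=np.array([-1.5, 1.5]), bounds=bnds)
```

With bounds given and no method specified, minimize falls back to a bound-capable method (L-BFGS-B), so the iterates stay inside the box.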

answered Sep 22 '22 by Brad Solomon