 

Optimization with Python (scipy.optimize)

I am trying to maximize the following function using Python's scipy.optimize. However, after lots of trying, it doesn't seem to work. The function and my code are pasted below. Thanks for helping!

Problem

Maximize [sum (x_i / y_i)**gamma]**(1/gamma)
subject to the constraint sum x_i = 1; x_i is in the interval (0,1). 

x is a vector of choice variables; y is a vector of parameters; gamma is a parameter. The x_i must sum to one, and each x_i must lie in the interval (0, 1).

Code

import math
import numpy as np
from scipy.optimize import minimize

def objective_function(x, y):
    sum_contributions = 0
    gamma = 0.2

    for count in range(len(x)):
        sum_contributions += (x[count] / y[count]) ** gamma
    value = math.pow(sum_contributions, 1 / gamma)
    return -value  # negated because minimize() minimizes

cons = ({'type': 'eq', 'fun': lambda x: np.array([sum(x) - 1])})

y = [0.5, 0.3, 0.2]
initial_x = [0.2, 0.3, 0.5]

opt = minimize(objective_function, initial_x, args=(y,), method='SLSQP',
               constraints=cons, bounds=[(0, 1)] * len(initial_x))

user58925



2 Answers

Sometimes a numerical optimizer doesn't work for whatever reason. We can parametrize the problem in a slightly different way and it will just work (and might even work faster).

For example, for bounds of (0, 1), we can use a transform function such that any value in (-inf, +inf), after being transformed, ends up in (0, 1).

We can do a similar trick with the equality constraint. For example, we can reduce the dimension from 3 to 2, since the last element of x has to equal 1 minus the sum of the others.

If it still won't work, we can switch to an optimizer that does not require derivative information, such as Nelder-Mead.

And there is also the method of Lagrange multipliers.
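
As a rough sketch of that last idea (my own illustration, not code from this answer): because the outer power 1/gamma is monotone for gamma > 0, we can maximize sum((x/y)**gamma) instead, write down its Lagrangian, and hand the stationarity conditions plus the constraint to a root finder. The helper name kkt_residuals and the log-variable substitution are choices made up for this example.

import numpy as np
from scipy.optimize import fsolve

def kkt_residuals(v, y, gamma=0.2):
    # v packs (log x_1, ..., log x_n, lambda); working in log space keeps x positive
    u, lam = v[:-1], v[-1]
    x = np.exp(u)
    stationarity = gamma * x**(gamma - 1.0) / y**gamma - lam   # dL/dx_i = 0
    feasibility = np.sum(x) - 1.0                              # sum x_i = 1
    return np.append(stationarity, feasibility)

y = np.array([0.5, 0.3, 0.2])
v0 = np.append(np.log(np.full(3, 1.0 / 3.0)), 1.0)   # uniform start, lambda = 1
sol = fsolve(kkt_residuals, v0, args=(y,))
x_opt = np.exp(sol[:-1])
print(x_opt, x_opt.sum())   # should land near the constrained optimum and sum to 1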

In [111]:

def trans_x(x):
    # map each free parameter from (-inf, +inf) into [0, 1) ...
    x1 = x**2 / (1 + x**2)
    # ... and append the last component so the result sums to 1
    z = np.hstack((x1, 1 - sum(x1)))
    return z

def F(x, y, gamma=0.2):
    # objective in the transformed (unconstrained) variables, negated for minimization
    z = trans_x(x)
    return -(((z / y)**gamma).sum())**(1. / gamma)
In [112]:

opt = minimize(F, np.array([0., 1.]), args=(np.array(y),),
               method='Nelder-Mead')
opt
Out[112]:
  status: 0
    nfev: 96
 success: True
     fun: -265.27701747828007
       x: array([ 0.6463264,  0.7094782])
 message: 'Optimization terminated successfully.'
     nit: 52

The result is:

In [113]:

trans_x(opt.x)
Out[113]:
array([ 0.29465097,  0.33482303,  0.37052601])

And we can visualize it with:

In [114]:

import matplotlib.pyplot as plt

x1 = np.linspace(0, 1)
y1 = np.linspace(0, 1)
X, Y = np.meshgrid(x1, y1)
# evaluate F on a grid over the two free parameters
Z = np.array([F(item, y) for item
              in np.vstack((X.ravel(), Y.ravel())).T]).reshape((len(x1), -1), order='F')
Z = np.fliplr(Z)
Z = np.flipud(Z)
plt.contourf(X, Y, Z, 50)
plt.colorbar()
plt.show()

(Contour plot of F over the two free parameters, with a colorbar for the objective value.)

CT Zhu


Even though this question is a bit dated, I wanted to add an alternative solution which might be useful for others stumbling upon it in the future.

It turns out your problem is solvable analytically. You can start by writing down the Lagrangian of the (equality-constrained) optimization problem (since the outer power 1/\gamma is monotone for \gamma > 0, it suffices to maximize the inner sum):

L = \sum_i (x_i/y_i)^\gamma - \lambda (\sum x_i - 1)

The optimal solution is found by setting the first derivative of this Lagrangian to zero:

0 = \partial L / \partial x_i = \gamma x_i^{\gamma-1}/y_i^\gamma - \lambda
=> x_i \propto y_i^{\gamma/(\gamma - 1)}

Using this insight the optimization problem can be solved simply and efficiently by:

In [4]:
def analytical(y, gamma=0.2):
    # closed form: x_i proportional to y_i**(gamma/(gamma-1)), normalized to sum to 1
    x = np.asarray(y)**(gamma/(gamma-1.0))
    x /= np.sum(x)
    return x
xanalytical = analytical(y)
xanalytical, objective_function(xanalytical, y)
Out [4]:
(array([ 0.29466774,  0.33480719,  0.37052507]), -265.27701765929692)

CT Zhu's solution is elegant, but it might violate the positivity constraint on the third coordinate. For gamma = 0.2 this does not seem to be a problem in practice, but for different gammas you can easily run into trouble:

In [5]:
y = [0.2, 0.1, 0.8]
opt = minimize(F, np.array([0., 1.]), args=(np.array(y), 2.0),
               method='Nelder-Mead')
trans_x(opt.x), opt.fun
Out [5]:
(array([ 1.,  1., -1.]), -11.249999999999998)

For other optimization problems with the same probability-simplex constraints as yours, but for which there is no analytical solution, it might be worth looking into projected gradient methods or similar. These methods leverage the fact that there is a fast algorithm for projecting an arbitrary point onto this set; see https://en.wikipedia.org/wiki/Simplex#Projection_onto_the_standard_simplex. A sketch of that projection is shown below.
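
As a rough sketch (my own illustration, not code from this answer or the linked notebook), the Euclidean projection onto the standard simplex can be computed with the well-known sorting-based algorithm; the helper name project_to_simplex is made up for this example:

import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto {x : x_i >= 0, sum(x) = 1}
    u = np.sort(v)[::-1]                            # sort in descending order
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]    # last index passing the threshold test
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# For example, the infeasible point from the previous cell is pulled back onto the simplex:
print(project_to_simplex(np.array([1.0, 1.0, -1.0])))   # -> [0.5, 0.5, 0.0]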

(To see the complete code and a better rendering of the equations, take a look at the Jupyter notebook: http://nbviewer.jupyter.org/github/andim/pysnippets/blob/master/optimization-simplex-constraints.ipynb)


Andi