
I am trying to maximize this simple nonlinear problem using Gekko, but I am getting this error:

@error: No solution found

from gekko import GEKKO
import numpy as np

positions = ["AAPL", "NVDA", "MS", "CI", "HON"]
cov = df_ret.cov()
ret = df_ret.mean().values
weights = np.array(np.random.random(len(positions)))

def maximize(weights):
    std = np.sqrt(np.dot(np.dot(weights.T,cov),weights))
    p_ret = np.dot(ret.T,weights)
    sharpe = p_ret/std
    return sharpe

a = GEKKO()
w1 = a.Var(value=0.2, lb=0, ub=1)
w2 = a.Var(value=0.2, lb=0, ub=1)
w3 = a.Var(value=0.2, lb=0, ub=1)
w4 = a.Var(value=0.2, lb=0, ub=1)
w5 = a.Var(value=0.2, lb=0, ub=1)

a.Equation(w1+w2+w3+w4+w5<=1)
weight = np.array([w1,w2,w3,w4,w5])

a.Obj(-maximize(weight))
a.solve(disp=False)

I am trying to figure out why it gives "no solution found" as an error.

df_ret is a DataFrame containing the returns for the stocks in positions (it was shown as an image in the original post).

I am trying to maximize the Sharpe ratio; w1 through w5 are the weights, with their sum constrained to be at most 1.

Mohit Tuteja asked May 17 '20



2 Answers

Here is a solution with gekko:

from gekko import GEKKO
import numpy as np
import pandas as pd

positions = ["AAPL", "NVDA", "MS", "CI", "HON"]

# daily returns for the five stocks
df_ret = pd.DataFrame(np.array([[.001729, .014603, .036558, .016772, .001983],
                                [-0.015906, .006396, .012796, -.002163, 0],
                                [-0.001849, -.019598, .014484, .036856, .019292],
                                [.006648, .002161, -.020352, -.007580, 0.022083],
                                [-.008821, -.014016, -.006512, -.015802, .012583]]))
cov = df_ret.cov().values
ret = df_ret.mean().values

a = GEKKO()

def obj(weights):
    # use gekko's sqrt so it can apply automatic differentiation
    std = a.sqrt(np.dot(np.dot(weights.T, cov), weights))
    p_ret = np.dot(ret.T, weights)
    sharpe = p_ret / std
    return sharpe

# lower bound of 1e-5 avoids sqrt(0) and division by zero
w = a.Array(a.Var, len(positions), value=0.2, lb=1e-5, ub=1)
a.Equation(a.sum(w) <= 1)
a.Maximize(obj(w))
a.solve(disp=False)

print(w)

A couple of things I've done to the problem: I used the Array function to create the variable weights w, and I switched to the gekko sqrt so that it does automatic differentiation for the objective function. I also added a lower bound of 1e-5 to avoid sqrt(0) and a divide by zero. Because the Obj() function minimizes, I removed the negative sign and used the Maximize() function instead to make the intent more readable. It produces this solution for w:

[[1e-05] [0.15810629919] [0.19423029287] [1e-05] [0.6476428726]]
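As a sanity check, the reported weights can be plugged back into a plain NumPy version of the objective (a quick sketch using the same returns data as above; the Sharpe ratio here is computed without a risk-free rate, as in the original code):

```python
import numpy as np
import pandas as pd

# same returns data as in the answer above
df_ret = pd.DataFrame(np.array([[.001729, .014603, .036558, .016772, .001983],
                                [-0.015906, .006396, .012796, -.002163, 0],
                                [-0.001849, -.019598, .014484, .036856, .019292],
                                [.006648, .002161, -.020352, -.007580, 0.022083],
                                [-.008821, -.014016, -.006512, -.015802, .012583]]))
cov = df_ret.cov().values
ret = df_ret.mean().values

# weights reported by gekko
w = np.array([1e-05, 0.15810629919, 0.19423029287, 1e-05, 0.6476428726])

std = np.sqrt(w @ cov @ w)    # portfolio volatility
sharpe = (ret @ w) / std      # Sharpe ratio, no risk-free rate
print(sharpe)
print(w.sum())                # sum of weights stays at or below 1
```

Because the Sharpe ratio is invariant to rescaling the weights, this should agree with the objective value found by the scipy solution below even though the raw weight vectors differ slightly in scale.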

Many people are more familiar with scipy. Here is a benchmark problem where the same problem is solved with scipy.optimize.minimize and gekko. There is also a link to that same solution with MATLAB fmincon or gekko with MATLAB.

John Hedengren answered Oct 13 '22


I am not familiar with GEKKO, so I can't really help with that package, but in case no one answers how to do it with GEKKO, here's a potential solution with scipy.optimize.minimize:

from scipy.optimize import minimize
import numpy as np
import pandas as pd


def OF(weights, cov, ret, sign=1.0):
    std = np.sqrt(np.dot(np.dot(weights.T, cov), weights))
    p_ret = np.dot(ret.T, weights)
    sharpe = p_ret / std
    return sign * sharpe


if __name__ == '__main__':

    x0 = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
    df_ret = pd.DataFrame(np.array([[.001729, .014603, .036558, .016772, .001983],
                                    [-0.015906, .006396, .012796, -.002163, 0],
                                    [-0.001849, -.019598, .014484, .036856, .019292],
                                    [.006648, .002161, -.020352, -.007580, 0.022083],
                                    [-.008821, -.014016, -.006512, -.015802, .012583]]))
    cov = df_ret.cov()
    ret = df_ret.mean().values

    # bounds: each weight between 0 and 1
    minx0 = np.repeat(0, len(x0))
    maxx0 = np.repeat(1, len(x0))
    bounds = tuple(zip(minx0, maxx0))

    # inequality constraint: sum of weights <= 1
    cons = {'type': 'ineq',
            'fun': lambda weights: 1 - sum(weights)}
    res_cons = minimize(OF, x0, (cov, ret, -1), bounds=bounds,
                        constraints=cons, method='SLSQP')

    print(res_cons)
    print('Current value of objective function: ' + str(res_cons['fun']))
    print('Current value of controls:')
    print(res_cons['x'])

which outputs:

     fun: -2.1048843911794486
     jac: array([ 5.17067784e+00, -2.36839056e-04, -6.24716282e-04,  6.56819057e+00,
        2.45392323e-04])
 message: 'Optimization terminated successfully.'
    nfev: 69
     nit: 9
    njev: 9
  status: 0
 success: True
       x: array([5.47832097e-14, 1.52927443e-01, 1.87864415e-01, 5.32258098e-14,
       6.26433468e-01])
Current value of objective function: -2.1048843911794486
Current value of controls:
[5.47832097e-14 1.52927443e-01 1.87864415e-01 5.32258098e-14
 6.26433468e-01]

The sign parameter is added here because, to maximize the objective function, you simply minimize OF*(-1). I set the default to 1 (minimize), but pass -1 in args to switch to maximization.
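The sign trick can be seen in isolation on a toy one-variable problem (a hypothetical example, not part of the answer above): maximize f(x) = -(x - 3)^2, whose peak is at x = 3, by minimizing its negation.

```python
from scipy.optimize import minimize

# toy objective to maximize: f(x) = -(x - 3)^2, maximum at x = 3
def f(x, sign=1.0):
    return sign * -(x[0] - 3) ** 2

# maximize f by minimizing sign * f with sign = -1
res = minimize(f, x0=[0.0], args=(-1.0,), method='SLSQP')

x_opt = res.x[0]    # should be close to 3
f_max = -res.fun    # undo the sign flip to recover the maximum value
```

The same pattern is used in the answer: the reported fun value is the negated Sharpe ratio, so the actual maximum Sharpe ratio is -res_cons['fun'].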

Anna Nevison answered Oct 13 '22