I'm trying to maximize a utility function by finding the optimal number of units, N, a person would use. One of the constraints is that they have finite money, m. So I'm trying to set up a constraint where N (an array of length 3) times the prices P (also an array of length 3), summed, cannot be greater than m.
For example:
P = np.array([3,4,5])
N = np.array([1,2,1])
m = 50
sum(P*N) > m
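In SciPy's constraint-dictionary form, an 'ineq' constraint is interpreted as fun(x) >= 0, so "sum(P*N) cannot be greater than m" is most naturally written as m - sum(P*N) >= 0. Here is a minimal sketch of just that constraint, using the P and m values from the example above:
import numpy as np

P = np.array([3, 4, 5])   # prices (given)
m = 50                    # available money

# 'ineq' means the function must be non-negative at a feasible point,
# so "total cost cannot exceed m" becomes m - sum(P*N) >= 0.
budget_constraint = {'type': 'ineq',
                     'fun': lambda N, P=P, m=m: m - np.sum(np.asarray(N) * P)}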
For this optimization, P is given based on a previous optimization. Now here's my actual code:
cons_c = [{'type':'ineq', 'fun': lambda N: 10 - sum(np.round(N)*P)},{'type':'ineq', 'fun': lambda N: 24 - sum(N*T)}]
bnds = [(0.,None) for x in range(len(N))]
optimized_c = scipy.optimize.minimize(utility_c, N, (P,Q,T), method='SLSQP', bounds=bnds, constraints=cons_c)
The function:
def utility_c(N,P,Q,T):
    print "N: {0}".format(N)
    print "P: {0}".format(P)
    print "Q: {0}".format(Q)
    print "T: {0}".format(T)
    N = np.round(N)
    m = 10 - sum(N*P)
    b = sum(N*Q)
    t = 24 - sum(N*T)
    print "m in C: {0}".format(m)
    print "b: {0}".format(b)
    print "t: {0}".format(t)
    # if m < 0 or t < 0:
    #     return 0
    return 1/ ((b**0.3)*(t**0.7))+(5*(m**0.5))
The problem is I still get negative m! So clearly I'm not passing the constraint properly. I'm guessing it's because P isn't used properly?
Output:
N: [ 1. 1. 1.]
P: [ 5. 14. 4.]
Q: [ 1. 3. 1.]
T: [ 1. 1. 1.01]
m in C: -13.0
What I've tried:
I've also tried passing P in args, like so:
cons_c = [{'type':'ineq', 'fun': lambda N,P: 10 - sum(np.round(N)*P), 'args':P},{'type':'ineq', 'fun': lambda N: 24 - sum(N*T)}]
But it tells me `Lambda wants 2 arguments and received 4`.
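For reference, that error is consistent with 'args' not being a tuple: 'args': P unpacks the three elements of P into three separate extra arguments, so the lambda receives N plus three values (four arguments in total) while expecting two. A sketch of the tuple form, which matches what the full project code below eventually uses:
cons_c = [{'type':'ineq', 'fun': lambda N, P: 10 - sum(np.round(N)*P), 'args': (P,)},
          {'type':'ineq', 'fun': lambda N: 24 - sum(N*T)}]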
**Update:**
Using (F,) in 'args' now allows the program to run without raising an error; however, the constraint still fails to hold.
Also, nan is returned once m takes a negative value, which of course throws the whole scipy optimization out of whack.
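The nan comes from m**0.5 with a negative m: as a NumPy float, (-13.0)**0.5 evaluates to nan rather than raising an error. One common workaround, shown here as a sketch rather than as the original code, is to return a large finite penalty whenever the iterate is infeasible, so the optimizer never sees nan:
def utility_c_safe(N, P, Q, T):
    # Same formula as utility_c, but guarded so that an infeasible iterate
    # returns a large (bad) value instead of nan during minimization.
    N = np.round(N)
    m = 10 - np.sum(N * P)
    b = np.sum(N * Q)
    t = 24 - np.sum(N * T)
    if m < 0 or t <= 0 or b <= 0:
        return 1e6   # large penalty instead of nan
    return 1 / ((b ** 0.3) * (t ** 0.7)) + 5 * (m ** 0.5)
Returning a large value (rather than 0, as in the commented-out guard above) matters because the function is being minimized; returning 0 would make infeasible points look attractive.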
**Full project code:**
import scipy.optimize
import numpy as np
import sys
def solve_utility(P,Q,T):
    """
    Here we are given the pricing already (P,Q,T), but solve for the quantities each type
    would purchase in order to maximize their utility (N).
    """
    def utility_a(N,P,Q,T):
        N = np.round(N)
        m = 50 - sum(N*P)
        b = sum(N*Q)
        t = 8 - sum(N*T)
        return 1/ ((b**0.5)*(t**0.5))+(5*(m**0.5))
    def utility_b(N,P,Q,T):
        N = np.round(N)
        m = 50 - sum(N*P)
        b = sum(N*Q)
        t = 8 - sum(N*T)
        return 1/ ((b**0.7)*(t**0.3))+(5*(m**0.5))
    def utility_c(N,P,Q,T):
        N = np.round(N)
        print "N: {0}".format(N)
        print "P: {0}".format(P)
        print "Q: {0}".format(Q)
        print "T: {0}".format(T)
        m = 10 - sum(N*P)
        b = sum(N*Q)
        t = 24 - sum(N*T)
        print "m in C: {0}".format(m)
        print "b: {0}".format(b)
        print "t: {0}".format(t)
        return 1/ ((b**0.3)*(t**0.7))+(5*(m**0.5))
    # Establishing constraints so no negative money or time:
    N = np.array([2,2,1])
    cons_a = [{'type':'ineq', 'fun': lambda N, P: 50 - sum(np.round(N)*P), 'args':(P,)},{'type':'ineq', 'fun': lambda N: 8 - sum(N*T)}]
    cons_b = [{'type':'ineq', 'fun': lambda N, P: 50 - sum(np.round(N)*P), 'args':(P,)},{'type':'ineq', 'fun': lambda N: 8 - sum(N*T)}]
    cons_c = [{'type':'ineq', 'fun': lambda N, P: 10 - sum(np.round(N)*P), 'args':(P,)},{'type':'ineq', 'fun': lambda N: 24 - sum(N*T)}]
    maxes = P/50
    bnds = [(0.,None) for x in range(len(N))]
    b = [()]
    optimized_a = scipy.optimize.minimize(utility_a, N, (P,Q,T), method='SLSQP', constraints=cons_a)
    optimized_b = scipy.optimize.minimize(utility_b, N, (P,Q,T), method='SLSQP', constraints=cons_b)
    optimized_c = scipy.optimize.minimize(utility_c, N, (P,Q,T), method='SLSQP', constraints=cons_c)
    if not optimized_a.success:
        print "Solving Utilities A didn't work..."
        return None
    if not optimized_b.success:
        print "Solving Utilities B didn't work..."
        return None
    if not optimized_c.success:
        print "Solving Utilities C didn't work..."
        return None
    else:
        print "returning N: {0}".format(np.array([optimized_a.x,optimized_b.x,optimized_c.x]))
        return np.array([optimized_a.x,optimized_b.x,optimized_c.x])
# solve_utility(P,Q,T,N)
def solve_profits():
    """
    Here we build the best pricing strategy to maximize solve_profits
    """
    P = np.array([ 3, 10.67, 2.30]) # Pricing
    Q = np.array([ 1, 4, 1])        # Quantity of beer for each unit
    T = np.array([ 1, 1, 4])        # Time cost per unit
    N = np.array([ 1, 0, 1])        # Quantities of unit taken by customer
    def profit(X):
        P,Q,T = X[0:3], X[3:6], X[6:9]
        Q[1] = round(Q[1]) # needs to be an integer
        N = solve_utility(P,Q,T)
        print "N: {0}".format(N)
        N = np.sum(N,axis=1)
        # print "P: {0}".format(P)
        # print "Q: {0}".format(Q)
        # print "T: {0}".format(T)
        denom = sum(N*P*Q) - sum(Q*N)
        return 1/ (sum(N*P*Q) - sum(Q*N))
    cons = [{'type':'ineq', 'fun': lambda X: X[8] - X[6] - 0.01 }, # The time expense for a coupon must be 0.01 greater than regular
            {'type':'ineq', 'fun': lambda X: X[4] - 2 },           # Packs must contain at least 2 beers
            {'type':'eq', 'fun': lambda X: X[3] - 1},              # Quantity has to be 1 for single beer
            {'type':'eq', 'fun': lambda X: X[5] - 1},              # same with coupons
            {'type':'ineq', 'fun': lambda X: X[6] - 1},            # Time cost must be at least 1
            {'type':'ineq', 'fun': lambda X: X[7] - 1},
            {'type':'ineq', 'fun': lambda X: X[8] - 1},
            ]
    X = np.concatenate([P,Q,T])
    optimized = scipy.optimize.minimize(profit, X, method='L-BFGS-B', constraints=cons)
    if not optimized.success:
        print "Solving Profits didn't work..."
    else:
        return optimized.x, N
X, N = solve_profits()
print "X: {0} N {1}".format(X,N)
P,Q,T = X[0:3], X[3:6], X[6:9]
rev = sum(P * Q * N)
cost = sum(Q * N)
profit = (rev-cost)*50
print "N: {0}".format(N)
print "P: {0}".format(P)
print "Q: {0}".format(Q)
print "T: {0}".format(T)
print "profit = {0}".format(profit)
NumPy/SciPy's functions are usually optimized for multithreading. Did you look at your CPU utilization to confirm that only one core is being used while the simulation is being run? Otherwise you have nothing to gain from running multiple instances.
scipy.optimize.minimize can be terminated using tol and maxiter (and maxfev for some optimization methods). There are also method-specific terminators like xtol, ftol, gtol, etc., as described in the SciPy documentation.
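For example, a sketch of how those terminators could be passed for the SLSQP call in the question (the specific tolerance and iteration values here are arbitrary):
optimized_c = scipy.optimize.minimize(
    utility_c, N, args=(P, Q, T), method='SLSQP',
    bounds=bnds, constraints=cons_c,
    tol=1e-8,                                 # top-level tolerance
    options={'maxiter': 500, 'ftol': 1e-8})   # SLSQP-specific options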
If you isolate the code for optimized_a and run it, you see the error it throws is exit mode 8 - the "positive directional derivative for linesearch" error.
Both BFGS and SLSQP are gradient-search methods, which means they take your initial guess, evaluate the gradient and its derivative, and look for the best direction to take a step in, always stepping downhill and stopping when the change in value is below the tolerance you set or a minimum is reached.
The error suggests that (at least at your initial guess) the problem does not have a strong derivative. In general, SLSQP is best used on problems that can be formulated as a sum of squares. Perhaps trying a more realistic initial guess would help. I would definitely discard most of the code and run a minimal example with optimized_a first, and once you get that working the rest can follow.
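A stripped-down sketch of that kind of minimal example is below. The P, Q and T values are the defaults from the posted solve_profits, the initial guess is an assumption, and np.round has been dropped: rounding inside the objective makes it piecewise constant, which gives a gradient-based method like SLSQP nothing to follow and is a plausible source of the exit mode 8 failure.
import numpy as np
import scipy.optimize

P = np.array([3.0, 10.67, 2.30])   # from the posted solve_profits defaults
Q = np.array([1.0, 4.0, 1.0])
T = np.array([1.0, 1.0, 4.0])
N0 = np.array([1.0, 1.0, 1.0])     # assumed initial guess

def utility_a(N, P, Q, T):
    # utility_a from the question, without the np.round call
    m = 50 - np.sum(N * P)
    b = np.sum(N * Q)
    t = 8 - np.sum(N * T)
    return 1 / ((b ** 0.5) * (t ** 0.5)) + 5 * (m ** 0.5)

cons_a = [{'type': 'ineq', 'fun': lambda N: 50 - np.sum(N * P)},   # money >= 0
          {'type': 'ineq', 'fun': lambda N: 8 - np.sum(N * T)}]    # time >= 0
bnds = [(0., None)] * len(N0)

optimized_a = scipy.optimize.minimize(utility_a, N0, args=(P, Q, T),
                                      method='SLSQP', bounds=bnds,
                                      constraints=cons_a)
print(optimized_a)   # inspect .status / .message to see how it terminated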
Perhaps a non-gradient-based solver would work, or, depending on problem size and the realistic bounds you have on the parameters, a global optimization may be workable. scipy.optimize is not great if you do not have a nice derivative to follow.
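Along those lines, one derivative-free option that still accepts the same 'ineq' constraint dictionaries is COBYLA. Here is a sketch, reusing utility_a, P, Q, T and N0 from the minimal example above (COBYLA ignores the bounds argument in older SciPy versions, so non-negativity of N is added as extra inequality constraints):
cons_cobyla = [{'type': 'ineq', 'fun': lambda N: 50 - np.sum(N * P)},  # money >= 0
               {'type': 'ineq', 'fun': lambda N: 8 - np.sum(N * T)},   # time >= 0
               {'type': 'ineq', 'fun': lambda N: N[0]},                # N >= 0, since
               {'type': 'ineq', 'fun': lambda N: N[1]},                # bounds are not
               {'type': 'ineq', 'fun': lambda N: N[2]}]                # used here
result = scipy.optimize.minimize(utility_a, N0, args=(P, Q, T),
                                 method='COBYLA', constraints=cons_cobyla)
print(result)
For a global search, scipy.optimize.differential_evolution is another derivative-free possibility, although it requires finite bounds on every variable.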