Errors when attempting constrained optimisation using optim()

I have been using the Excel solver to handle the following problem

solve for a, b, and c in the equation:

y = a*b*c*x / ((1 - c*x)*(1 - c*x + b*c*x))

subject to the constraints

0 < a < 100
0 < b < 100
0 < c < 100

f(x[1]) < 10
f(x[2]) > 20
f(x[3]) < 40

where I have about 10 (x, y) value pairs. I minimize the sum of abs(y - f(x)), and in Excel I can constrain both the coefficients and the value the function takes at each x.

I tried nls (without trying to impose the constraints) and while Excel provided estimates for almost any starting values I cared to provide, nls almost never returned an answer.
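
For what it's worth, my nls attempt looked something like this (the starting values shown are just one example of the many I tried):

dat <- data.frame(x = c(.1, .5, .9), y = c(5, 26, 35))
# this usually stopped with a convergence or singular-gradient error
fit <- nls(y ~ a*b*c*x / ((1 - c*x)*(1 - c*x + b*c*x)),
           data = dat, start = list(a = 1, b = 1, c = 1))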

I switched to using optim, but I'm having trouble applying the constraints.

This is as far as I have gotten:

# objective: sum of absolute residuals between the data and the model
best <- function(p, x, y) {
  sum(abs(y - p[1]*p[2]*p[3]*x / ((1 - p[3]*x)*(1 - p[3]*x + p[2]*p[3]*x))))
}
p <- c(1, 1, 1)     # starting values for a, b, c
x <- c(.1, .5, .9)
y <- c(5, 26, 35)
optim(p, best, x = x, y = y)

To apply the first set of constraints, I did this:

optim(p, best, x = x, y = y, method = "L-BFGS-B",
      lower = c(0, 0, 0), upper = c(100, 100, 100))

I get the error "ERROR: ABNORMAL_TERMINATION_IN_LNSRCH"

and end up with a higher value of the error ($value). So it seems like I am doing something wrong. I couldn't figure out how to apply my other set of constraints at all.

Could someone give me a basic idea of how to solve this problem that a non-statistician can understand? I have looked at a lot of posts and in a few R books, but the books stop at the simplest use of optim.

asked Jul 08 '12 by Jon Swanson


1 Answer

The absolute value makes the objective non-differentiable wherever a residual is zero: you may want to use squared residuals instead, especially for gradient-based methods (such as L-BFGS).

The denominator of your function can be zero (whenever c*x = 1, or 1 - c*x + b*c*x = 0), so the objective blows up near those points.
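
If you stay with a general-purpose optimizer, one simple safeguard (a sketch of my own, not used in the code below) is to return a large value whenever the denominator gets too close to zero:

# squared-error objective with a guard against near-zero denominators
f_safe <- function(p, x, y) {
  den <- (1 - p[3]*x) * (1 - p[3]*x + p[2]*p[3]*x)
  if (any(abs(den) < 1e-8)) return(1e10)  # arbitrary large penalty value
  sum((y - p[1]*p[2]*p[3]*x / den)^2)
}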

The fact that the parameters appear in products and that you allow them to be (arbitrarily close to) zero can also cause problems.
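
One possible mitigation (again my own suggestion, not something the methods below require) is to optimize on the log scale, which keeps a, b, c strictly positive; the lower bound of 1e-6 is an arbitrary choice:

# same squared-error objective, parameterized by q = log(c(a, b, c))
obj_log <- function(q, x, y) {
  p <- exp(q)  # back-transform: the parameters stay strictly positive
  r <- y - p[1]*p[2]*p[3]*x / ((1 - p[3]*x)*(1 - p[3]*x + p[2]*p[3]*x))
  sum(r^2)
}
optim(log(c(1, 1, 1)), obj_log, x = c(.1, .5, .9), y = c(5, 26, 35),
      method = "L-BFGS-B", lower = log(rep(1e-6, 3)), upper = log(rep(100, 3)))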

You can try other optimizers (see the CRAN Optimization task view for a complete list) until you find one for which the optimization converges.

x0 <- c(.1, .5, .9)
y0 <- c(5, 26, 35)
p  <- c(1, 1, 1)
lower <- 0 * p        # c(0, 0, 0)
upper <- 100 + lower  # c(100, 100, 100)

# squared-error objective: smoother than the absolute value
f <- function(p, x = x0, y = y0) sum(
  (
    y - p[1]*p[2]*p[3]*x / ( (1 - p[3]*x)*(1 - p[3]*x + p[2]*p[3]*x) )
  )^2
)

library(dfoptim)
nmkb(p, f, lower=lower, upper=upper) # Converges

library(Rvmmin)
Rvmmin(p, f, lower=lower, upper=upper) # Does not converge

library(DEoptim)
DEoptim(f, lower, upper) # Does not converge

library(NMOF)
PSopt(f, list(min=lower, max=upper))[c("xbest", "OFvalue")] # Does not really converge
DEopt(f, list(min=lower, max=upper))[c("xbest", "OFvalue")] # Does not really converge

library(minqa)
bobyqa(p, f, lower, upper) # Does not really converge
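
None of this handles your second set of constraints, the bounds on f(x[i]) themselves. One standard trick (a sketch, not something the packages above do for you) is to fold them into the objective as penalty terms, using the limits 10, 20 and 40 from your question:

# model value at x for parameters p = c(a, b, c)
model <- function(p, x) p[1]*p[2]*p[3]*x / ((1 - p[3]*x)*(1 - p[3]*x + p[2]*p[3]*x))

# squared-error objective plus penalties for violating f(x[1]) < 10,
# f(x[2]) > 20, f(x[3]) < 40; the weight 1e4 is an arbitrary choice
f_pen <- function(p, x = x0, y = y0) {
  fx  <- model(p, x)
  pen <- max(fx[1] - 10, 0)^2 + max(20 - fx[2], 0)^2 + max(fx[3] - 40, 0)^2
  sum((y - fx)^2) + 1e4 * pen
}
nmkb(p, f_pen, lower = lower, upper = upper)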

As a last resort, you can always use a grid search.

library(NMOF)
# exhaustive evaluation of f on a 200-point grid in each dimension
r <- gridSearch( f,
  lapply(seq_along(p), function(i) seq(lower[i], upper[i], length.out = 200))
)
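
Note that with 200 points in each of the three dimensions this is 200^3 = 8 million function evaluations, so you may want to start with a coarser grid. The minimum is in the returned list:

r$minlevels  # parameter values at the best grid point
r$minfun     # objective value there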
answered Dec 08 '22 by Vincent Zoonekynd