
Parallelized optimization in R


I'm running R on a Linux box that has 8 multicore processors, and I have an optimization problem I'd like to speed up by parallelizing the optimization routine itself. Importantly, this problem involves (1) multiple parameters and (2) inherently slow model runs. A fairly common problem!

Anyone know of a parallelized optimizer for such occasions?

More specifically, solvers like nlm() run multiple model evaluations (two per parameter) each time the algorithm takes a step in parameter space, so parallelizing that batch of model runs would greatly speed things up when more than a few parameters are being fit.

It seems like code built on the parallel package could be written so that the user would need only minimal modification to move from nlm() or optim() to a parallelized optimization routine. That is, one could rewrite these routines essentially unchanged, except that the step of calling the model multiple times, as is common in gradient-based methods, would be done in parallel.

Ideally, something like nlmPara() would take code that looks like

fit <- nlm(MyObjFunc, params0);

and require only minor modifications, e.g.,

fit <- nlmPara(MyObjFunc, params0, ncores=6);
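To make that concrete, here's a rough, untested sketch of what such a wrapper might look like around optim() (optimPara and its ncores argument are made up for illustration):

library(parallel)

optimPara <- function(par, fn, ..., ncores = 2, method = "BFGS") {
  cl <- makeCluster(ncores)
  on.exit(stopCluster(cl))                # shut workers down when we return
  gr <- function(p, ...) {                # forward-difference gradient, in parallel
    h <- pmax(abs(p), 1) * 1e-8           # per-parameter step size
    steps <- lapply(seq_along(p), function(i) { q <- p; q[i] <- q[i] + h[i]; q })
    f0 <- fn(p, ...)
    fi <- parSapply(cl, steps, fn, ...)   # the p model runs happen in parallel
    (fi - f0) / h
  }
  optim(par, fn, gr = gr, ..., method = method)
}

fit <- optimPara(params0, MyObjFunc, ncores = 6)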

Thoughts/suggestions?

PS: I've taken steps to speed up those model runs, but they're slow for a variety of reasons (i.e. I don't need advice on speeding up the model runs! ;-) ).

asked Mar 13 '13 by Paul J Hurtado


2 Answers

Here is a rough solution that at least has some promise. Big thanks to Ben Bolker for pointing out that many/most optimization routines allow user-specified gradient functions.

A test problem with more parameters might show more significant improvements, but on an 8-core machine the run using the parallelized gradient function takes about 70% as long as the serial version. Note that the crude gradient approximation used here seems to slow convergence and thus adds some time to the process.

## Set up the cluster
require("parallel");
.nlocalcores = NULL; # Default to "Cores available - 1" if NULL.
if(is.null(.nlocalcores)) { .nlocalcores = detectCores() - 1; }
if(.nlocalcores < 1) { stop("Multiple cores unavailable! See code!!"); }
print(paste("Using", .nlocalcores, "cores for parallelized gradient computation."))
.cl=makeCluster(.nlocalcores);
print(.cl)


# Now define a gradient function: both in serial and in parallel
mygr <- function(.params, ...) {
  dp = cbind(rep(0,length(.params)),diag(.params * 1e-8)); # TINY finite difference
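  # NB: this relative step is zero (and the difference quotient undefined) for any
  # parameter equal to 0; the same caveat applies to mypgr() below.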
  Fout = apply(dp,2, function(x) fn(.params + x,...));     # Serial 
  return((Fout[-1]-Fout[1])/diag(dp[,-1]));                # finite difference 
}

mypgr <- function(.params, ...) { # Now use the cluster 
  dp = cbind(rep(0,length(.params)),diag(.params * 1e-8));   
  Fout = parCapply(.cl, dp, function(x) fn(.params + x,...)); # Parallel 
  return((Fout[-1]-Fout[1])/diag(dp[,-1]));                  #
}


## Lets try it out!
fr <- function(x, slow=FALSE) { ## Rosenbrock Banana function from optim() documentation.
  if(slow) { Sys.sleep(0.1); }   ## Modified to be a little slow, if needed.
  x1 <- x[1]
  x2 <- x[2]
  100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}

grr <- function(x, slow=FALSE) { ## Gradient of 'fr'
  if(slow) { Sys.sleep(0.1); }   ## Modified to be a little slow, if needed.
  x1 <- x[1]
  x2 <- x[2]
  c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
    200 *      (x2 - x1 * x1))
}

## Make sure the nodes can see these functions & other objects as called by the optimizer
fn <- fr;  # A bit of a hack
clusterExport(.cl, "fn");

# First, test our gradient approximation function mypgr
print( mypgr(c(-1.2,1)) - grr(c(-1.2,1)))

## Some test calls, following the examples in the optim() documentation
tic = Sys.time();
fit1 = optim(c(-1.2,1), fr, slow=FALSE);                          toc1=Sys.time()-tic
fit2 = optim(c(-1.2,1), fr, gr=grr, slow=FALSE, method="BFGS");   toc2=Sys.time()-tic-toc1
fit3 = optim(c(-1.2,1), fr, gr=mygr, slow=FALSE, method="BFGS");  toc3=Sys.time()-tic-toc1-toc2
fit4 = optim(c(-1.2,1), fr, gr=mypgr, slow=FALSE, method="BFGS"); toc4=Sys.time()-tic-toc1-toc2-toc3


## Now slow it down a bit
tic = Sys.time();
fit5 = optim(c(-1.2,1), fr, slow=TRUE);                           toc5=Sys.time()-tic
fit6 = optim(c(-1.2,1), fr, gr=grr, slow=TRUE, method="BFGS");    toc6=Sys.time()-tic-toc5
fit7 = optim(c(-1.2,1), fr, gr=mygr, slow=TRUE, method="BFGS");   toc7=Sys.time()-tic-toc5-toc6
fit8 = optim(c(-1.2,1), fr, gr=mypgr, slow=TRUE, method="BFGS");  toc8=Sys.time()-tic-toc5-toc6-toc7

print(cbind(fast=c(default=toc1,exact.gr=toc2,serial.gr=toc3,parallel.gr=toc4),
            slow=c(toc5,toc6,toc7,toc8)))
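One housekeeping step not shown in the timings above: shut the cluster down when finished.

## Release the worker processes when done.
stopCluster(.cl);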
answered Sep 22 '22 by Paul J Hurtado


I am the author of the R package optimParallel. It provides parallel versions of the gradient-based optimization methods of optim(). The main function of the package is optimParallel(), which has the same usage and output as optim(). Using optimParallel() can significantly reduce optimization times, as illustrated in the following figure (p is the number of parameters).

[Figure: optimization times of optimParallel() compared to optim(), as a function of the number of parameters p]

See https://cran.r-project.org/package=optimParallel and http://arxiv.org/abs/1804.11058 for more information.
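For reference, here is a minimal usage sketch based on the package's documented interface (the slow toy objective is made up; see the package documentation for the current API):

library(optimParallel)
library(parallel)

cl <- makeCluster(detectCores() - 1)
setDefaultCluster(cl = cl)       # optimParallel() uses the default cluster

slowObj <- function(par) {       # stand-in for an inherently slow model run
  Sys.sleep(0.1)
  sum((par - c(1, 2, 3))^2)
}

fit <- optimParallel(par = c(0, 0, 0), fn = slowObj)  # same usage as optim()
fit$par

stopCluster(cl)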

answered Sep 22 '22 by Nairolf