using data.table with multiple threads in R

Is there a way to use multiple threads for computation with data.table in R? For example, let's say I have the following data.table:

library(data.table)

dtb <- data.table(id=rep(1:10000, 1000), x=1:1e7)
setkey(dtb, id)
f <- function(m) {
    # some really complicated function
}
res <- dtb[, f(x), by=id]

Is there a way to get R to multithread this if f takes a while to compute? And in the case where f is quick, will multithreading help, or will most of the time be taken by data.table splitting things up into groups?

asked Aug 17 '12 by Alex

People also ask

Is data.table multi-threaded?

By default, data.table is multi-threaded. getDTthreads() reports how many threads it will use, and setDTthreads() changes the limit while returning the prior setting so it can be restored later.

Does R support multi-threading?

Base R evaluates code on a single thread, but the multi-core machines of today offer parallel processing power. To take advantage of this, Microsoft R Open provides optional multi-threaded math libraries, which make it possible for many common R operations to use all of the processing power available.
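
As a concrete illustration, here is a minimal sketch of inspecting and capping data.table's internal thread usage with the getDTthreads()/setDTthreads() functions mentioned above (the cap of 2 threads is an arbitrary choice for the example):

 library(data.table)

 getDTthreads()           # how many threads data.table will use by default
 old <- setDTthreads(2)   # cap at 2 threads; invisibly returns the prior setting
 ## ... run grouped operations under the capped setting ...
 setDTthreads(old)        # restore the prior setting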


1 Answer

I am not sure this is strictly "multi-threading"; perhaps you meant a multi-core solution? If so, look at this earlier answer: Performing calculations by subsets of data in R, found with a search for "[r] [data.table] parallel".
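
The basic pattern from that answer is to split the work across processes with parallel::mclapply and reassemble the pieces. Here is a minimal sketch against the dtb and f from the question (the split into 4 chunks is an arbitrary choice, and mclapply relies on forking, so it won't parallelize on Windows):

 library(data.table)
 library(parallel)

 ## Split the ids into chunks, run the grouped calculation on each
 ## chunk in its own forked process, then stack the results.
 ids    <- unique(dtb$id)
 chunks <- split(ids, cut(seq_along(ids), 4))
 res <- rbindlist(mclapply(chunks,
                           function(s) dtb[id %in% s, f(x), by = id],
                           mc.cores = 4))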

Edit: This gives roughly a doubling of speed on a 4-core machine, although my system monitor suggests only 2 cores were used during the mclapply call. Code copied from this thread: http://r.789695.n4.nabble.com/Access-to-local-variables-in-quot-j-quot-expressions-tt2315330.html#a2315337

 require(data.table)
 require(parallel)   # provides mclapply

 ## Build a fake data set. The y1..y4 columns don't do anything, but
 ## only exist to give the table a similar size to the real data.
 mk.fake.df <- function (n.groups=10000, n.per.group=70) {
     data.frame(grp=rep(1:n.groups, each=n.per.group),
                age=rep(0:(n.per.group-1), n.groups),
                x=rnorm(n.groups * n.per.group),
                y1=rnorm(n.groups * n.per.group),
                y2=rnorm(n.groups * n.per.group),
                y3=rnorm(n.groups * n.per.group),
                y4=rnorm(n.groups * n.per.group))
 }

 mk.fake.dt <- function (fake.df) {
     fake.dt <- as.data.table(fake.df)
     setkey(fake.dt, grp, age)
     fake.dt
 }

 ## Cumulative sum of x, lagged by one position.
 cumsum.lag <- function (x) {
     x.prev <- c(0, x[-length(x)])
     cumsum(x.prev)
 }

 ## Run the grouped calculation for several critical ages, one age
 ## per forked process.
 calc.fake.dt.mclapply <- function (dt) {
     mclapply(6*c(1000,1:4,6,8,10),
              function(critical.age) {
                  dt$tmp <- pmax((dt$age < critical.age) * dt$x, 0)
                  dt[, cumsum.lag(tmp), by = grp]$V1})
 }

 ## The same calculation with base lapply, for comparison.
 calc.fake.dt.lapply <- function (dt) {
     lapply(6*c(1000,1:4,6,8,10),
            function(critical.age) {
                dt$tmp <- pmax((dt$age < critical.age) * dt$x, 0)
                dt[, cumsum.lag(tmp), by = grp]$V1})
 }

 df <- mk.fake.df()
 dt <- mk.fake.dt(df)

 system.time(res.dt.mclapply <- calc.fake.dt.mclapply(dt))
    user  system elapsed 
   1.896   4.413   1.210 

 system.time(res.dt.lapply <- calc.fake.dt.lapply(dt))
    user  system elapsed 
   1.391   0.793   2.175 
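
A likely explanation for seeing only 2 cores in use: mclapply defaults to mc.cores = getOption("mc.cores", 2L). A sketch of the same timing with the option raised to all detected cores:

 ## mclapply uses getOption("mc.cores", 2L) by default; raise it first.
 options(mc.cores = parallel::detectCores())
 system.time(res.dt.mclapply <- calc.fake.dt.mclapply(dt))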
answered Nov 15 '22 by IRTFM