R data.table by list of conditionals or row indices

I have a data.table that includes distances. I want to run various operations within the data.table by my "id" variable and an inclusive distance threshold (e.g. Dist<=1, Dist<=2, etc.). I know how to run an operation by id and exact distance, "by=list(id,Dist)", but what I really want is a by variable more like "by=list(id,c(Dist<=1,Dist<=2,Dist<=3,Dist<=4,Dist<=5))". Below is an example of my data structure and objective.

#load library
    library(data.table)
#create data  
    set.seed(123L)
    dt<-data.table(id=factor(rep(1:10,5)),V1=rnorm(50,5,5),Dist=sample(1:5,50,replace=T))
#calculate mean of V1 by id and distance (wrong results)
    dt2<-dt[,.(MeanV1=mean(V1)),by=list(id,Dist)]
#calculate mean of V1 by id and conditional distance (right results, wrong method)
    dt2.1<-dt[Dist<=1,.(MeanV1=mean(V1)),by=id]
    dt2.2<-dt[Dist<=2,.(MeanV1=mean(V1)),by=id]
    dt2.3<-dt[Dist<=3,.(MeanV1=mean(V1)),by=id]
    dt2.4<-dt[Dist<=4,.(MeanV1=mean(V1)),by=id]
    dt2.5<-dt[Dist<=5,.(MeanV1=mean(V1)),by=id]
    dt2<-rbind(dt2.1,dt2.2,dt2.3,dt2.4,dt2.5)
#ideal methods if either were valid
    #syntax 1
        dt2<-dt[,.(MeanV1=mean(V1)),by=list(id,c(Dist<=1,Dist<=2,Dist<=3,Dist<=4,Dist<=5))]
    #syntax 2
        rowindices<-list(dt$Dist<=1,dt$Dist<=2,dt$Dist<=3,dt$Dist<=4,dt$Dist<=5)
        dt2<-dt[,.(MeanV1=mean(V1)),by=list(id,rowindices)]

Thanks in advance.

asked Mar 07 '23 by Brian A


2 Answers

Frank's answer in the comments will achieve what you're after. Here's an explanation:

One thing you can do with data.table is a "non-equi join", which is what the first data.table call in this answer does.
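To make the idea concrete, here is a toy sketch (the names left, right, and limit are invented for illustration, not from the original answer). A non-equi join matches rows on an inequality rather than on equality:

    left  <- data.table(x = c(1, 5, 10))
    right <- data.table(limit = c(4, 8))
    # keep pairs where x < limit; as with Dist below, the x column of
    # the result reports the right-hand limit value, not the original x
    left[right, on = .(x < limit), nomatch = 0]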

First we create a table of thresholds we want to operate over:

> thresholds <- data.table(dist_threshold=1:5)
> thresholds
   dist_threshold
1:              1
2:              2
3:              3
4:              4
5:              5

Next we perform a non-equi join of the thresholds table against the original table: this creates an expanded table in which each row of dt appears once for every threshold its Dist falls below:

> passes_threshold <- dt[thresholds, on=.(Dist < dist_threshold), # non-equi join
+    allow.cartesian=TRUE, # Fixes error, see details in ?data.table
+    nomatch=0 # Do not include thresholds which no row satisfies (i.e. Dist < 1)
+   ]
> passes_threshold
# Here the Dist column now means "Dist < dist_threshold".
# There will be 5 rows where Dist < 2, 19 where Dist < 3,
# 30 where Dist < 4, and 40 where Dist < 5
    id        V1 Dist
 1:  8  8.521825    2
 2:  5  2.002523    2
 3:  6  8.698732    2
 4:  9 -1.701028    2
 5:  2  6.114119    2
---                  
90:  6 -1.392776    5
91: 10  9.033493    5
92:  1  9.565713    5
93:  5  4.579124    5
94:  7  1.498690    5

We can now combine the join with the summary operations in the j and by arguments to compute the mean of V1 per threshold:

> passes_threshold[,.(mean_Dist_by_threshold=mean(V1)), by=.(threshold=Dist)]
   threshold mean_Dist_by_threshold
1:         2               4.727234
2:         3               4.615258
3:         4               4.202856
4:         5               4.559240
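Note that the question asked for inclusive thresholds (Dist <= k), while the join above uses strict <, which is also why threshold 1 drops out under nomatch=0. The same non-equi join pattern supports <=; a minimal variant (my sketch, not part of the original answer), with id added to by to match the question's per-id output:

    # inclusive variant: Dist <= k, so threshold 1 is kept as well
    passes_incl <- dt[thresholds, on = .(Dist <= dist_threshold),
                      allow.cartesian = TRUE, nomatch = 0]
    # group by id as well to reproduce the question's dt2.1..dt2.5 result
    passes_incl[, .(MeanV1 = mean(V1)), by = .(id, threshold = Dist)]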
answered Mar 10 '23 by Scott Ritchie


As a supplement to Scott's answer, his solution can be written more concisely as

dt[.(1:5), on = .(Dist < V1), allow = TRUE, nomatch = 0][
  , .(mean_Dist_by_threshold = mean(V1)), by = .(threshold = Dist)]

Here, .(1:5) creates the thresholds table on the fly, allow is partial argument matching for allow.cartesian, and the two data.table expressions are chained.
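One detail worth spelling out (my note, not the answerer's): .(1:5) builds a one-column data.table whose unnamed column is auto-named V1, and it is this V1, not dt's own V1 column, that the join condition Dist < V1 refers to. In the joined result, mean(V1) then picks up dt's own V1 again, since the i-table's values have been folded into Dist:

    # .() is an alias for list() inside [.data.table; an unnamed
    # column gets the automatic name V1
    names(data.table(1:5))
    # [1] "V1"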

Alternatively, the aggregation can be done during the join using by = .EACHI:

dt[.(1:5), on = .(Dist < V1), nomatch = 0, 
   .(mean_Dist_by_threshold = mean(V1)), by = .EACHI][
     , setnames(.SD, "Dist", "threshold")]

The call to setnames() is just for convenience to return the same result as Scott's answer.
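For readers unfamiliar with by = .EACHI: it evaluates j once for each row of i, so the aggregation happens during the join itself rather than in a second chained call. A toy sketch with invented data:

    x <- data.table(a = c(1, 1, 2), v = c(10, 20, 30))
    # j runs once per row of i: here once for a == 1 and once for a == 2
    x[.(a = c(1, 2)), on = .(a), .(sum_v = sum(v)), by = .EACHI]
    #    a sum_v
    # 1: 1    30
    # 2: 2    30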

Benchmark code

library(data.table)
# create data
nr <- 5e2L  # also run with 5e4L and 5e6L for the benchmark results below
set.seed(123L) # to make the data reproducible
dt <-
  data.table(
    id = factor(rep(1:10, nr / 10)),
    V1 = rnorm(nr, 5, 5),
    Dist = sample(1:5, nr, replace = T)
  )
str(dt)

microbenchmark::microbenchmark(
  scott = {
    thresholds <- data.table(dist_threshold=1:5)
    passes_threshold <-
      dt[thresholds, on = .(Dist < dist_threshold), # non-equi join
         allow.cartesian = TRUE, # Fixes error, see details in ?data.table
         nomatch = 0 # Do not include thresholds which no row satisfies (i.e. Dist < 1)
         ]
    passes_threshold[, .(mean_Dist_by_threshold = mean(V1)), by = .(threshold = Dist)]
  },
  uwe1 = {
    dt[.(1:5), on = .(Dist < V1), allow = TRUE, nomatch = 0][
      , .(mean_Dist_by_threshold = mean(V1)), by = .(threshold = Dist)]
  },
  uwe2 = {
    dt[.(1:5), on = .(Dist < V1), nomatch = 0, 
       .(mean_Dist_by_threshold = mean(V1)), by = .EACHI][
         , setnames(.SD, "Dist", "threshold")]
  },
  times = 100L
)

Benchmark results

With 500 rows, there are only slight differences among the three variants: chaining is slightly ahead of Scott's, and by = .EACHI is behind.

Unit: milliseconds
  expr      min       lq     mean   median       uq       max neval cld
 scott 1.460058 1.506854 1.618048 1.526019 1.726257  4.768493   100  a 
  uwe1 1.302760 1.327686 1.487237 1.338926 1.372498 12.733933   100  a 
  uwe2 1.827756 1.864777 1.944920 1.888349 2.020097  2.233269   100   b

With 50,000 rows, chaining is still slightly ahead of Scott's, but by = .EACHI now outperforms both.

Unit: milliseconds
  expr      min       lq     mean   median       uq       max neval cld
 scott 3.692545 3.811466 4.016152 3.826423 3.853489 10.336598   100   b
  uwe1 3.560786 3.632999 3.936583 3.642526 3.657992 13.579008   100   b
  uwe2 2.503508 2.545722 2.577735 2.566869 2.602586  2.798692   100  a

With 5 M rows, the difference becomes much more evident (note neval = 3: fewer repetitions were run at this size):

Unit: milliseconds
  expr      min       lq     mean   median       uq      max neval cld
 scott 641.9945 675.3749 743.0761 708.7552 793.6170 878.4787     3   b
  uwe1 587.1724 587.5557 589.1360 587.9391 590.1178 592.2965     3   b
  uwe2 130.9358 134.6688 157.1860 138.4019 170.3110 202.2202     3  a 

One explanation of the speed difference might be the sheer size of the intermediate result passes_threshold, more than 10 M rows (this is also why allow.cartesian = TRUE is required).
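As a rough check of that size (my sketch; assumes the 5e6-row dt from the benchmark code is in scope): a row with Dist = d matches the 5 - d thresholds above it, about 2 of the 5 on average, so the join yields roughly 2 × 5 M = 10 M rows:

    # total number of non-equi matches for Dist < dist_threshold
    sum(5L - dt$Dist)  # roughly 1e7 when nr = 5e6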

answered Mar 10 '23 by Uwe