Subset multiple columns by .N in large data.table

Tags: r, data.table

I have a large data.table that contains millions of rows and 30 columns. The columns contain varying numbers of categorical features. I would like to remove (blank out) any feature value that occurs in less than a certain proportion of the rows.

Here is an example:

dt <- data.table(id = 1:18,
                 col1 = c(rep("a", 5), rep("b", 10), rep("c", 3)),
                 col2 = c(rep("d", 12), rep("e", 5), "f"))
dt
    id col1 col2
 1:  1    a    d
 2:  2    a    d
 3:  3    a    d
 4:  4    a    d
 5:  5    a    d
 6:  6    b    d
 7:  7    b    d
 8:  8    b    d
 9:  9    b    d
10: 10    b    d
11: 11    b    d
12: 12    b    d
13: 13    b    e
14: 14    b    e
15: 15    b    e
16: 16    c    e
17: 17    c    e
18: 18    c    f

For example, I only want to keep values that make up more than a proportion of 0.5 of their column:

> dt[,.N,by=col1][N/sum(N)>0.5]
   col1  N
1:    b 10

and

> dt[,.N,by=col2][N/sum(N)>0.5]
   col2  N
1:    d 12
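
This check generalises across columns; a minimal sketch of collecting each column's frequent levels programmatically (assuming character columns, as here):

cols <- c("col1", "col2")
# `by` accepts a column name given as a string; [[1L]] extracts the level column
frequent <- setNames(lapply(cols, function(cl)
    dt[, .N, by = cl][N / sum(N) > 0.5][[1L]]), cols)
frequent
# $col1
# [1] "b"
#
# $col2
# [1] "d"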

The way I approached this was to loop over the columns and use %in%:

for (i in 1:2) dt[, paste0('newcol', i) := lapply(dt[[paste0('col', i)]],
    function(y) ifelse(y %in% dt[, .N, by = dt[[paste0('col', i)]]][N/sum(N) > 0.5][[1]], y, ""))]

and then I create a new column that holds the merged values, dt[, merge := paste(newcol1, newcol2), by=id], which gets me my desired output in the merge column:

> dt
    id col1 col2 newcol1 newcol2 merge
 1:  1    a    d               d     d
 2:  2    a    d               d     d
 3:  3    a    d               d     d
 4:  4    a    d               d     d
 5:  5    a    d               d     d
 6:  6    b    d       b       d   b d
 7:  7    b    d       b       d   b d
 8:  8    b    d       b       d   b d
 9:  9    b    d       b       d   b d
10: 10    b    d       b       d   b d
11: 11    b    d       b       d   b d
12: 12    b    d       b       d   b d
13: 13    b    e       b            b 
14: 14    b    e       b            b 
15: 15    b    e       b            b 
16: 16    c    e                      
17: 17    c    e                      
18: 18    c    f         

The trouble is that this is really slow on a large data set. I suspect that I am not approaching this in a "data.table-y" way. I also have to be very careful about not copying the original data set, because it barely fits into my RAM, which is why data.table is so appealing in the first place. I don't mind intermediate steps, however, as long as the process is quicker.
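
A vectorised variant of the same loop helps somewhat: compute the set of frequent levels once per column, then apply a single %in% over the whole column rather than an element-wise lapply/ifelse. A sketch of that rewrite (not benchmarked):

for (i in 1:2) {
    cl   <- paste0("col", i)
    # frequent levels of this column, computed once
    keep <- dt[, .N, by = cl][N / sum(N) > 0.5][[1L]]
    # one vectorised %in% over the whole column
    dt[, paste0("newcol", i) := ifelse(get(cl) %in% keep, get(cl), "")]
}
dt[, merge := paste(newcol1, newcol2)]  # paste is vectorised; no by=id needed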

> sessionInfo()
R version 3.0.2 (2013-09-25)
Platform: x86_64-apple-darwin10.8.0 (64-bit)

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] data.table_1.9.2

loaded via a namespace (and not attached):
[1] plyr_1.8.1    Rcpp_0.11.2   reshape2_1.4  stringr_0.6.2 tools_3.0.2 
Asked by Michael Whitaker

1 Answer

Hopefully I've gotten it right this time. Too much time wasted on wrong answers!

cols = paste("col", 1:2, sep="")
rows = nrow(dt)
for (i in seq_along(cols)) {
    # `if` without `else` returns NULL for frequent groups, and := with a
    # NULL RHS skips those groups, leaving their values untouched; rare
    # groups (<= 50% of rows) are overwritten with ""
    dt[, (cols[i]) := if (.N/rows <= .5) "", by=c(cols[i])]
}
# paste every column except the first (id) into one merge column
dt[, merge := do.call(paste, c(as.list(dt)[-1L], sep=" "))]
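
As a quick sanity check, rerunning this on a freshly built copy of the example data (the loop overwrites col1 and col2 in place) reproduces the desired merge column:

dt <- data.table(id = 1:18,
                 col1 = c(rep("a", 5), rep("b", 10), rep("c", 3)),
                 col2 = c(rep("d", 12), rep("e", 5), "f"))
cols = paste("col", 1:2, sep="")
rows = nrow(dt)
for (i in seq_along(cols)) {
    dt[, (cols[i]) := if (.N/rows <= .5) "", by=c(cols[i])]
}
dt[, merge := do.call(paste, c(as.list(dt)[-1L], sep=" "))]
unique(dt$merge)
# [1] " d"  "b d" "b "  " "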

Here's a benchmark on a data.table with 1e6 rows and 30 columns:

set.seed(1L)
# 1e6 rows x 30 character columns; setDT() names them V1..V30
dt = setDT(lapply(1:30, function(x) sample(letters[1:4], 1e6, TRUE)))
system.time({
    cols = paste("V", 1:30, sep="")
    rows = nrow(dt)
    for (i in seq_along(cols)) {
        dt[, (cols[i]) := if (.N/rows <= .5) "", by=c(cols[i])]
    }
    # [-1L] mirrors dropping the id column in the example (here it skips V1)
    dt[, merge := do.call(paste, c(as.list(dt)[-1L], sep=" "))]
})
#  user  system elapsed 
# 4.880   0.086   5.095 

I'll let you pick up any further optimisations. Good luck!
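
One alternative formulation (an untested sketch, not from the original answer): compute the frequent values per column up front and blank the remaining rows with set() and %chin%:

rows = nrow(dt)
for (cl in cols) {
    # frequent levels of this column (every row belongs to a group, so
    # dividing N by nrow(dt) is equivalent to N/sum(N))
    keep <- dt[, .N, by = cl][N / rows > 0.5][[1L]]
    # %chin% is data.table's fast %in% for character vectors
    set(dt, i = which(!dt[[cl]] %chin% keep), j = cl, value = "")
}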

Answered by Arun

