What is the best way to perform the following kinds of operations?
Let's say I want to find the indices of all rows that have user_id < 0, or, say, find the user_ids that have average sales > 50.
Currently I first generate the data.table with an index column and then filter it, which essentially builds another data.table. Is there a more efficient way?
In this example, I want to get the index 2, i.e. the index of the row where user_id is < 0:
library(data.table)
x <- data.table(user_id = c(1, -1, 2, 3), iqlevel = c(40, 50, 60, 70))
x[, I := .I, ][user_id < 0, I, ]
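For the second kind of operation (user_ids with average sales > 50), a grouped version of the same idea, using a hypothetical sales column purely for illustration, would look something like:
# hypothetical table with a sales column, for illustration only
s <- data.table(user_id = c(1, 1, 2, 2, 3), sales = c(60, 70, 20, 30, 90))
# aggregate by user_id, then keep the user_ids whose mean sales exceed 50
s[, .(avg_sales = mean(sales)), by = user_id][avg_sales > 50, user_id]
# [1] 1 3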
Here you go:
x[, .I[which(user_id < 0)]]
Here is another option:
x[user_id < 0, which = TRUE]
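The which = TRUE argument makes [.data.table return the row numbers that i matches instead of the matching rows themselves. A quick check on the question's data (restated here so the snippet is self-contained):
library(data.table)
x <- data.table(user_id = c(1, -1, 2, 3), iqlevel = c(40, 50, 60, 70))
x[user_id < 0, which = TRUE]
# [1] 2
x[user_id < 0]   # for comparison: the matching rows themselves
#    user_id iqlevel
# 1:      -1      50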
And here are some timings, using my dtq helper tool:
library(data.table) # v1.9.5
# devtools::install_github("jangorecki/dtq")
# drat::addRepo("jangorecki"); install.packages("dtq")
library(dtq)
op <- data.table(user_id=c(1,-1,2,3),iqlevel=c(40,50,60,70))
sheffien <- copy(op)
jan <- copy(op)
r1 <- op[,I:=.I,][user_id<0,I,]
r2 <- sheffien[,.I[which(user_id<0)] ]
r3 <- jan[user_id<0, which=TRUE]
identical(r1,r2)
# [1] TRUE
identical(r2,r3)
# [1] TRUE
dtcalls <- dtl(print=TRUE) # collect logs
print(dtcalls)
#    seq dtq_id dtq_seq      src                           query           timestamp         env     elapsed in_rows out_rows
# 1:   1      1       1       op               [j = `:=`(I, .I)] 2015-08-07 14:01:10 R_GlobalEnv 0.001718847       4        4
# 2:   2      1       2       op        [i = user_id < 0, j = I] 2015-08-07 14:01:10 R_GlobalEnv 0.008719112       4       NA
# 3:   3      2       1 sheffien    [j = .I[which(user_id < 0)]] 2015-08-07 14:01:10 R_GlobalEnv 0.000662418       4       NA
# 4:   4      3       1      jan [i = user_id < 0, which = TRUE] 2015-08-07 14:01:10 R_GlobalEnv 0.000281067       4       NA
dtcalls[,.(query=paste(query, collapse=""), sec=sum(elapsed)),.(src)]
#         src                                     query         sec
# 1:       op [j = `:=`(I, .I)][i = user_id < 0, j = I] 0.010437959
# 2: sheffien              [j = .I[which(user_id < 0)]] 0.000662418
# 3:      jan           [i = user_id < 0, which = TRUE] 0.000281067
Keep in mind this is just a single run on a 4-row data.table, so be sure to validate the speed on your own data.
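For more meaningful numbers, a minimal benchmarking sketch along these lines could be used; the table size and values below are made up for illustration, and the microbenchmark package is just one convenient way to repeat the runs:
library(data.table)
library(microbenchmark)

# hypothetical larger table: 1e6 rows, some with negative user_id
set.seed(1)
big <- data.table(user_id = sample(-10:10000, 1e6, replace = TRUE),
                  iqlevel = rnorm(1e6, mean = 100, sd = 15))

microbenchmark(
  chained = copy(big)[, I := .I][user_id < 0, I],  # the question's approach
  dot_I   = big[, .I[which(user_id < 0)]],         # .I with which()
  which   = big[user_id < 0, which = TRUE],        # the which = TRUE argument
  times   = 10
)
The copy() in the first expression is only there so the benchmark does not keep adding an I column to big by reference; its cost is therefore included in that timing.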