In my work I tend to have several tables (customer details, transaction records, etc.). Since some of them are very big (millions of rows), I've recently switched to the data.table
package (thanks Matthew). However, some of them are quite small (a few hundred rows and 4-5 columns) and are queried several times. Therefore I started to think about the [.data.table
overhead in retrieving data, rather than set()ting values, which is already clearly described in ?set
: regardless of the size of the table, one item is set in around 2 microseconds (depending on CPU).
However, there doesn't seem to be an equivalent of set
for getting a value from a data.table
when the exact row and column are known. A sort of loopable [.data.table
.
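To make the access pattern concrete, here is a sketch of the kind of workload in question (illustrative only; lookup is a made-up table, not one of my real ones): a small table read one element at a time inside a loop, where every read pays the full [.data.table dispatch cost.
# Illustrative sketch of repeated single-element reads from a small table
library(data.table)
lookup = data.table(id = 1:500, rate = runif(500))
total = 0
for (k in 1:10000) {
  i = sample(nrow(lookup), 1L)      # pick a row to read
  total = total + lookup[i, rate]   # each call goes through [.data.table
}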
library(data.table)
library(microbenchmark)
m = matrix(1,nrow=100000,ncol=100)
DF = as.data.frame(m)
DT = as.data.table(m) # same data used in ?set
> microbenchmark(DF[3450,1] , DT[3450, V1], times=1000) # much more overhead in DT
Unit: microseconds
expr min lq median uq max neval
DF[3450, 1] 32.745 36.166 40.5645 43.497 193.533 1000
DT[3450, V1] 788.791 803.453 813.2270 832.287 5826.982 1000
> microbenchmark(DF$V1[3450], DT[3450, 1, with=F], times=1000) # using atomic vector and
# removing part of DT overhead
Unit: microseconds
expr min lq median uq max neval
DF$V1[3450] 2.933 3.910 5.865 6.354 36.166 1000
DT[3450, 1, with = F] 297.629 303.494 305.938 309.359 1878.632 1000
> microbenchmark(DF$V1[3450], DT$V1[3450], times=1000) # using only atomic vectors
Unit: microseconds
expr min lq median uq max neval
DF$V1[3450] 2.933 2.933 3.421 3.422 40.565 1000 # DF seems still a bit faster (23%)
DT$V1[3450] 3.910 3.911 4.399 4.399 16.128 1000
The last method is indeed the best one for quickly retrieving a single element several times. However, set
is even faster:
> microbenchmark(set(DT,1L,1L,5L), times=1000)
Unit: microseconds
expr min lq median uq max neval
set(DT, 1L, 1L, 5L) 1.955 1.956 2.444 2.444 24.926 1000
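For reference, this is the loopable pattern that ?set describes and that motivates the question (a sketch along the lines of the help-page example, using the DT defined above):
# set() assigns by position without the dispatch and argument handling
# of [.data.table, so each call costs only a couple of microseconds
for (i in 1:1000) set(DT, i, 1L, i)
# versus the equivalent, much slower, := inside [.data.table:
# for (i in 1:1000) DT[i, V1 := i]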
The question is: if we can set
a value in 2.444 microseconds, shouldn't it be possible to get a value in a smaller (or at least similar) amount of time? Thanks.
EDIT: adding two more options as suggested:
> microbenchmark(`[.data.frame`(DT,3450,1), DT[["V1"]][3450], times=1000)
Unit: microseconds
expr min lq median uq max neval
`[.data.frame`(DT, 3450, 1) 46.428 47.895 48.383 48.872 2165.509 1000
DT[["V1"]][3450] 20.038 21.504 23.459 24.437 116.316 1000
Unfortunately, neither option is faster than the previous attempts.
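(As an aside, calling [.data.frame directly on DT works because a data.table also inherits from data.frame:)
class(DT)   # "data.table" "data.frame"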
Thanks to @hadley we have the solution!
> microbenchmark(DT$V1[3450], set(DT,1L,1L,5L), .subset2(DT, "V1")[3450], times=1000, unit="us")
Unit: microseconds
expr min lq median uq max neval
DT$V1[3450] 2.566 3.208 3.208 3.528 27.582 1000
set(DT, 1L, 1L, 5L) 1.604 1.925 1.925 2.246 15.074 1000
.subset2(DT, "V1")[3450] 0.000 0.321 0.322 0.642 8.339 1000
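For convenience, .subset2 can be wrapped in a tiny getter mirroring set()'s positional interface (just a sketch; get_val is a made-up name, not part of data.table):
# Hypothetical helper: single-element read by position, skipping S3 dispatch
get_val = function(x, i, j) .subset2(x, j)[i]
get_val(DT, 3450L, 1L)   # same value as DT[3450, V1]
.subset2 is the internal workhorse behind [[, minus method dispatch, which is exactly the overhead that dominates these timings.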