Calling unique on a keyed data.table gives you one row per key group. When a key value has duplicated rows, the first one is kept. When I need to keep the last one instead (typically the most recent transaction), I use .SD[.N]:
library(data.table)
library(microbenchmark)
dt <- data.table(id = sample(letters, 10000, replace = TRUE), var = rnorm(10000), key = "id")
microbenchmark(unique(dt), dt[, .SD[.N], by=id])
Unit: microseconds
                   expr      min        lq    median       uq        max neval
             unique(dt)  570.882  586.1155  595.8975  608.406   3209.122   100
 dt[, .SD[.N], by = id] 6532.739 6637.7745 6694.3820 6776.968 208264.433   100
Do you know a faster way to do the same?
Create a data.table that contains the unique combinations of the key variables, then join using mult = 'last'. Using .SD is convenient, but slow. You could use .I instead if you wished.
# unique key combinations (unique() on a keyed table keeps one row per key)
dtu <- unique(dt)[, key(dt), with = FALSE]
# join back, taking the last matching row of dt for each key value
dt[dtu, mult = 'last']
Or
# .I[.N] gives the row index of the last row per key; subset dt by those indices
dt[dt[, .I[.N], by = key(dt)]$V1]
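As a quick sanity check (not part of the original answer), all three approaches should return the same rows; a minimal sketch, assuming the dt defined in the question:

res_sd   <- dt[, .SD[.N], by = id]
res_join <- dt[unique(dt)[, key(dt), with = FALSE], mult = 'last']
res_i    <- dt[dt[, .I[.N], by = key(dt)]$V1]
# compare the data only, ignoring key/index attributes
identical(as.data.frame(res_sd), as.data.frame(res_join))  # should be TRUE
identical(as.data.frame(res_sd), as.data.frame(res_i))     # should be TRUE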
From data.table v1.9.4, you can use fromLast = TRUE.
microbenchmark(unique(dt, by = "id"),
               dt[, .SD[.N], by = id],
               unique(dt, by = "id", fromLast = TRUE))
Unit: microseconds
                                   expr     min       lq     mean   median       uq      max neval cld
                  unique(dt, by = "id") 333.978 355.1900 406.1585 371.1360 393.4015 3203.769   100  a
                 dt[, .SD[.N], by = id] 519.320 541.4345 580.2176 553.6200 563.5490 2690.167   100   b
 unique(dt, by = "id", fromLast = TRUE) 338.190 366.4725 430.1296 380.9145 400.7730 4774.663   100  a
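For completeness (not from the answer itself), a short check, assuming the same dt, that fromLast = TRUE picks exactly the same rows as the .SD[.N] approach:

res_last <- unique(dt, by = "id", fromLast = TRUE)
res_sd   <- dt[, .SD[.N], by = id]
# same rows in the same order: dt is keyed by id, so groups come out sorted either way
identical(as.data.frame(res_last), as.data.frame(res_sd))  # should be TRUE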