What's the best way to get the following vector into a data.table along with the index of each number?
nVector <- c("20 37", "38 23", "39 48", "45 76", "65 44", "86 95 80")
Here is what I have, is there a better way?
library(data.table)

vLength <- unname(sapply(nVector, function(x) length(unlist(strsplit(x, " ")))))
vSeq <- seq(1, length(vLength))
vPosition <- vector("list", length(vLength))
for (i in 1:length(vLength)) {
  vPosition[[i]] <- rep(vSeq[i], vLength[i])
}
DT <- data.table(index = unlist(vPosition),
                 value = unlist(strsplit(nVector, " ")))
Note: The length of the vector is about 125k, and there are ~ 20 million values.
Why not simply:
data.table(v1 = nVector)[, index := .I][, list(unlist(strsplit(v1, " "))), by = index]
## index V1
## 1: 1 20
## 2: 1 37
## 3: 2 38
## 4: 2 23
## 5: 3 39
## 6: 3 48
## 7: 4 45
## 8: 4 76
## 9: 5 65
## 10: 5 44
## 11: 6 86
## 12: 6 95
## 13: 6 80
Alternatively, you can create a function like the following (using a function is more for convenience of reuse--not necessary if it's just a one-time problem):
fun <- function(invec) {
  x <- strsplit(invec, " ", TRUE)
  data.table(index = rep(seq_along(x), lengths(x)),
             V1 = unlist(x, use.names = FALSE))
}
fun(nVector)
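To see how the `index` column in `fun` comes together, here is the `rep()`/`lengths()` step on its own, using the toy vector from the question:

```r
## How fun()'s index column is built: each position is repeated
## once per split piece
nVector <- c("20 37", "38 23", "39 48", "45 76", "65 44", "86 95 80")
x <- strsplit(nVector, " ", fixed = TRUE)
lengths(x)                      # 2 2 2 2 2 3
rep(seq_along(x), lengths(x))   # 1 1 2 2 3 3 4 4 5 5 6 6 6
```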
Note that the use of fixed = TRUE
will give you a good speed boost, because the separator is matched as a literal string rather than as a regular expression--so you should consider it even in the "data.table" approach.
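A quick, minimal illustration of what `fixed = TRUE` changes (this is general `strsplit` behavior, not specific to this problem):

```r
## With fixed = TRUE the separator is a literal string; without it,
## the separator is interpreted as a regular expression, where "."
## would match any single character.
strsplit("a.b.c", ".", fixed = TRUE)[[1]]  # "a" "b" "c"
```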
Finally, as @Jaap suggests, you can use cSplit
from my "splitstackshape" package, like this:
library(splitstackshape)
cSplit(data.table(v1 = nVector)[, index := .I], "v1", sep = " ", direction = "long")
Since performance is a concern given the size of your data, you might want to use the fun
approach, which builds the "data.table" manually.
Here's some timing on a larger version of your vector:
NVector <- rep(nVector, 10000)
length(NVector)
# [1] 60000
f1 <- function(invec) {
  data.table(v1 = invec)[, index := .I][
    , list(unlist(strsplit(v1, " ", TRUE))), by = index]
}
f2 <- function(invec) {
  cSplit(data.table(v1 = invec)[, index := .I],
         "v1", sep = " ", direction = "long")
}
library(microbenchmark)
microbenchmark(fun(NVector), f1(NVector), f2(NVector), times = 50)
# Unit: milliseconds
# expr min lq mean median uq max neval
# fun(NVector) 13.26559 13.70738 15.89918 14.12573 15.11083 50.84675 50
# f1(NVector) 196.95570 207.60004 223.74729 212.49649 224.78725 378.51007 50
# f2(NVector) 167.38512 176.16370 196.28389 183.96098 202.00187 412.71760 50
The performance of any of these is likely to depend at least in part on the number of resulting split pieces, so I also tested on longer strings with more pieces per element, and added a variant of fun
that uses "stringi".
Here is the new sample data:
library(stringi)
set.seed(2)
NVec2 <- vapply(sample(20, 60000, TRUE),
                function(x) paste(stri_rand_strings(x, 5, "[0-9]"), collapse = " "),
                character(1L))
length(NVec2)
# [1] 60000
Here are the new functions:
## like `fun`, but using `stri_split_fixed`
fun_stringi <- function(invec) {
  x <- stri_split_fixed(invec, " ")
  data.table(index = rep(seq_along(x), lengths(x)),
             V1 = unlist(x, use.names = FALSE))
}
## A base R alternative
f3 <- function(invec) stack(setNames(strsplit(invec, " ", TRUE), seq_along(invec)))
## A tidyverse approach (requires "dplyr" and "tidyr")
f4 <- function(invec) {
  data_frame(ind = seq_along(invec),
             val = stri_split_fixed(invec, " ")) %>%
    unnest()
}
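One thing to be aware of with the `stack()`-based `f3` (an observation, not part of the benchmarks): it returns a plain data.frame with fixed column names `values` and `ind`, and `ind` comes back as a factor, so a little post-processing is needed if you want output matching `fun()`:

```r
## stack() returns a data.frame with columns "values" and "ind";
## "ind" is a factor built from the list names
nVector <- c("20 37", "38 23", "86 95 80")
out <- stack(setNames(strsplit(nVector, " ", TRUE), seq_along(nVector)))
names(out)        # "values" "ind"
class(out$ind)    # "factor"
## so to match fun()'s layout you would do something like:
## data.table(index = as.integer(as.character(out$ind)), V1 = out$values)
```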
And the new benchmarking:
library(microbenchmark)
res <- microbenchmark(base = fun(NVec2), stringi = fun_stringi(NVec2),
data_table = f1(NVec2), splitstackshape = f2(NVec2),
base_alt = f3(NVec2), tidyverse = f4(NVec2), times = 50)
res
# Unit: milliseconds
# expr min lq mean median uq max neval
# base 162.6149 174.7311 204.0177 187.3446 213.7267 443.8357 50
# stringi 146.8655 157.6717 187.1125 168.5383 192.1952 394.1169 50
# data_table 360.0788 382.9118 427.2276 396.0421 418.1821 598.3754 50
# splitstackshape 542.8882 578.6317 619.9677 598.5113 626.5734 901.9400 50
# base_alt 259.2847 293.7944 325.6021 310.7322 339.1613 492.4644 50
# tidyverse 500.1571 519.4765 545.4757 534.1167 549.4756 713.3711 50
Increasing the data further to get closer to simulating your actual dataset, performance really starts to converge--with the exception of "splitstackshape", which slows down horribly :-(
Here's a sample:
library(stringi)
set.seed(2)
NVec3 <- vapply(sample(100:200, 125000, TRUE),
                function(x) paste(stri_rand_strings(x, 5, "[0-9]"), collapse = " "),
                character(1L))
system.time({out <- f2(NVec3)})
# user system elapsed
# 20.89 0.03 20.94
## Similar to your actual data
length(NVec3)
# [1] 125000
nrow(out)
# [1] 18767938
res <- microbenchmark(base = fun(NVec3), stringi = fun_stringi(NVec3),
data_table = f1(NVec3), base_alt = f3(NVec3),
tidyverse = f4(NVec3), times = 20)
res
## Unit: seconds
## expr min lq mean median uq max neval
## base 4.967281 5.606208 5.983120 5.978414 6.345823 7.189997 20
## stringi 4.888080 5.292926 5.811898 5.728464 6.091029 7.923210 20
## data_table 5.625772 5.861431 6.244174 6.092079 6.420082 7.698534 20
## base_alt 4.635496 5.015382 5.564661 5.486531 6.090838 7.034357 20
## tidyverse 5.634781 6.186927 6.717203 6.613003 7.198013 8.154297 20
library(ggplot2)  # provides the autoplot() generic used here
autoplot(res, log = FALSE)
At this point, and assuming that you're unlikely to be doing this repeatedly, any of the options should be pretty decent for actual use. I was personally surprised that stack
outperformed all of the other options.