
R data.table fread select with a partial string match?

I would like to use fread to pull in only the columns whose names match a condition. (In this case, I would like to pull in all columns whose names contain the string "email".) Imagine that you have this data in a file called tempdata.txt in your working directory:

col1,col2,col3,email1,email2,col4,url1,url2,col5
1,2,3,4,5,6,7,8,9
9,8,7,6,5,4,3,2,1
x,x,x,[email protected],[email protected],y,y,y,y
a,a,a,a,a,a,http://google.com,http://stackoverflow.com,a

It is possible to use fread to load a subset of the columns if you know the names:

test <- data.table::fread("tempdata.txt", select=c("email1","email2"))

> test
      email1      email2
1:         4           5
2:         6           5
3: [email protected] [email protected]
4:         a           a

Is it also possible to select using a partial string match? I am trying to mimic this dplyr behavior, but within the fread call itself:

> library(dplyr)
> all <- data.table::fread("tempdata.txt")
> all %>% select(contains("email"))
      email1      email2
1:         4           5
2:         6           5
3: [email protected] [email protected]
4:         a           a

Thanks for any insight.

asked Mar 02 '23 by mysteRious

2 Answers

I don't think fread has that capability built in (I don't see it in the docs). However, a relatively inexpensive approach is to read just the header row, get the column names, grep them, and go from there.

library(data.table)
fwrite(data.table(a=1:2, email1=c('a','b'), snailmail=c('c','d'), email2=c('e','f')), "test.csv")
fread("test.csv", nrows=1)
#    a email1 snailmail email2
# 1: 1      a         c      e
cols <- colnames(fread("test.csv", nrows=0))
cols
# [1] "a"         "email1"    "snailmail" "email2"   
fread("test.csv", select = grep("^email", cols, value = TRUE))
#    email1 email2
# 1:      a      e
# 2:      b      f
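The two-step approach above can be wrapped in a small convenience function. This is a sketch, not part of data.table; the name `fread_matching` is my own, and extra arguments are passed through to fread:

```r
library(data.table)

# Read only the header, keep the columns whose names match `pattern`,
# then fread just those columns. `...` is forwarded to fread.
fread_matching <- function(file, pattern, ...) {
  hdr <- colnames(fread(file, nrows = 0))
  fread(file, select = grep(pattern, hdr, value = TRUE), ...)
}

# Example with the same file as above:
fwrite(data.table(a = 1:2, email1 = c("a", "b"),
                  snailmail = c("c", "d"), email2 = c("e", "f")),
       "test.csv")
fread_matching("test.csv", "^email")
#    email1 email2
# 1:      a      e
# 2:      b      f
```

Because the pattern is a regular expression, `"^email"` avoids accidentally matching columns like `snailmail`, which a plain substring match on "mail" would pick up.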
answered Mar 05 '23 by r2evans


An alternative for when your data is very clean is to use readLines:

colnames = strsplit(readLines('test.csv', 1L), ',', fixed=TRUE)[[1L]]

This will be faster as fread does come with some overhead:

microbenchmark::microbenchmark(
  times = 1e5,
  fread = fread("test.csv", nrows=0L),
  fread_optim = fread('test.csv', nrows=0L, sep=',', header=TRUE),
  read_csv = read.csv('test.csv', nrows=1L),
  strsplit = strsplit(readLines('test.csv', n=1L), ',', fixed=TRUE)[[1L]],
  scan = scan('test.csv', character(), nlines=1L, sep=',', quiet=TRUE)
)
# Unit: microseconds
#         expr     min      lq      mean   median       uq         max neval
#        fread 224.128 252.349 303.55132 270.4815 305.0580   62815.127 1e+05
#  fread_optim 224.410 253.128 378.10699 271.3815 306.3630 7451270.616 1e+05
#     read_csv 256.298 295.847 348.54183 316.1290 356.0520   46047.083 1e+05
#     strsplit  36.026  47.563  60.13347  55.3050  61.7490    6344.610 1e+05
#         scan  42.121  56.584  69.75281  63.5750  71.4535    6497.283 1e+05

Note, of course, that everything here is on the microsecond scale, so for a one-off read the difference won't matter much.

I say "when your data is very clean" because fread would (should) also work in scenarios where your data is a bit dirtier, or if you don't know the sep in advance, etc.
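Putting the two steps together: a minimal sketch that parses the header with readLines and feeds the matching names into fread's select argument (this assumes a clean, comma-separated, unquoted header; test.csv is recreated here for completeness):

```r
library(data.table)

# Same test file as in the other answer.
fwrite(data.table(a = 1:2, email1 = c("a", "b"),
                  snailmail = c("c", "d"), email2 = c("e", "f")),
       "test.csv")

# Split the first line on commas to recover the column names,
# then read only the columns matching the pattern.
hdr <- strsplit(readLines("test.csv", n = 1L), ",", fixed = TRUE)[[1L]]
fread("test.csv", select = grep("^email", hdr, value = TRUE))
#    email1 email2
# 1:      a      e
# 2:      b      f
```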

answered Mar 05 '23 by MichaelChirico