I have very large tables (30 million rows) that I would like to load as data frames in R. read.table()
has a lot of convenient features, but it seems like there is a lot of logic in the implementation that would slow things down. In my case, I am assuming I know the types of the columns ahead of time, the table does not contain any column headers or row names, and does not have any pathological characters that I have to worry about.
I know that reading in a table as a list using scan()
can be quite fast, e.g.:
datalist <- scan('myfile',sep='\t',list(url='',popularity=0,mintime=0,maxtime=0))
But some of my attempts to convert this to a dataframe appear to decrease the performance of the above by a factor of 6:
df <- as.data.frame(scan('myfile',sep='\t',list(url='',popularity=0,mintime=0,maxtime=0)))
Is there a better way of doing this? Or, quite possibly, a completely different approach to the problem?
For a table the size of DF, data.table is slower; for the size of DF2 it is faster.
Loading a large dataset: use fread() or functions from readr instead of read.xxx(). If you really need to read an entire CSV into memory, by default R users reach for the read.table method or variations thereof (such as read.csv).
Not only was fread() almost 2.5 times faster than readr's functionality in reading and binding the data, but perhaps even more importantly, the maximum memory used was only 15.25 GB, compared to readr's 27 GB. Interestingly, even though it is very slow, base R also used less memory than the tidyverse suite.
If the CSV files are extremely large, the best way to import them into R is with the fread() method from the data.table package. In that case the output will be a data.table rather than a data frame.
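For instance, a minimal sketch of fread() on the file from the question (the file name and column names come from the question; the separator, column types, and the final conversion step are assumptions):

library(data.table)
# read a tab-separated file with no header; column names and types are assumed here
dt <- fread('myfile', sep='\t', header=FALSE,
            col.names=c('url','popularity','mintime','maxtime'),
            colClasses=c('character','numeric','numeric','numeric'))
df <- as.data.frame(dt)   # only needed if a plain data.frame, not a data.table, is required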
An update, several years later
This answer is old, and R has moved on. Tweaking read.table to run a bit faster has precious little benefit. Your options are (a short usage sketch of the flat-file readers follows this list):
1. Using vroom from the tidyverse package vroom for importing data from csv/tab-delimited files directly into an R tibble. See Hector's answer.
2. Using fread in data.table for importing data from csv/tab-delimited files directly into R. See mnel's answer.
3. Using read_table in readr (on CRAN from April 2015). This works much like fread above. The readme in the link explains the difference between the two functions (readr currently claims to be "1.5-2x slower" than data.table::fread).
4. read.csv.raw from iotools provides a third option for quickly reading CSV files.
5. Trying to store as much data as you can in databases rather than flat files. (As well as being a better permanent storage medium, data is passed to and from R in a binary format, which is faster.) read.csv.sql in the sqldf package, as described in JD Long's answer, imports data into a temporary SQLite database and then reads it into R. See also: the RODBC package, and the reverse depends section of the DBI package page. MonetDB.R gives you a data type that pretends to be a data frame but is really a MonetDB underneath, increasing performance. Import data with its monetdb.read.csv function. dplyr allows you to work directly with data stored in several types of database.
6. Storing data in binary formats can also be useful for improving performance. Use saveRDS/readRDS (see below), the h5 or rhdf5 packages for HDF5 format, or write_fst/read_fst from the fst package.
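As promised above, here is a rough usage sketch of the other flat-file readers on the file layout from the question (a fread() call is sketched earlier on this page; the delimiter and column names below are assumptions):

library(vroom)
# vroom returns a tibble; it indexes the file and reads columns lazily
tb <- vroom('myfile', delim='\t',
            col_names=c('url','popularity','mintime','maxtime'))

library(readr)
# read_tsv is readr's tab-delimited reader; read_table is for whitespace-separated files
tb2 <- read_tsv('myfile',
                col_names=c('url','popularity','mintime','maxtime'))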
The original answer
There are a couple of simple things to try, whether you use read.table or scan.
1. Set nrows = the number of records in your data (nmax in scan).
2. Make sure that comment.char="" to turn off interpretation of comments.
3. Explicitly define the classes of each column using colClasses in read.table.
4. Setting multi.line=FALSE may also improve performance in scan.
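Applying these settings to the file from the question might look roughly like this (the row count and column classes are assumptions):

df <- read.table('myfile', sep='\t', header=FALSE,
                 nrows=30000000,          # known number of rows, avoids over-allocation
                 comment.char='',         # disable comment scanning
                 colClasses=c('character','numeric','numeric','numeric'),
                 col.names=c('url','popularity','mintime','maxtime'))

# the scan() equivalents are nmax and multi.line
datalist <- scan('myfile', sep='\t', nmax=30000000, multi.line=FALSE,
                 what=list(url='', popularity=0, mintime=0, maxtime=0))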
If none of these things works, then use one of the profiling packages to determine which lines are slowing things down. Perhaps you can write a cut-down version of read.table based on the results.
The other alternative is filtering your data before you read it into R.
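For example, a shell command can do the filtering so that only the rows you need ever reach R (a sketch; the grep pattern is purely illustrative, and the cmd argument belongs to data.table::fread):

# keep only rows mentioning example.com before R parses anything
dt <- data.table::fread(cmd="grep 'example.com' myfile", sep='\t', header=FALSE)

# a base R equivalent, streaming the filtered output through a pipe connection
df <- read.table(pipe("grep 'example.com' myfile"), sep='\t')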
Or, if the problem is that you have to read it in regularly, then use these methods to read the data in once, then save the data frame as a binary blob with saveRDS; the next time you can retrieve it faster with readRDS.
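For example (a sketch; the .rds file name is a placeholder):

# pay the import cost once
df <- read.table('myfile', sep='\t', header=FALSE)
saveRDS(df, 'myfile.rds')

# on subsequent runs, restore the data frame from the binary file instead
df <- readRDS('myfile.rds')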