 

Speed up RData load

Tags: io, r

I've checked several related questions, such as this one:

How to load data quickly into R?

I'm quoting a specific part of the top-rated answer:

It depends on what you want to do and how you process the data further. In any case, loading from a binary R object is always going to be faster, provided you always need the same dataset. The limiting speed here is the speed of your harddrive, not R. The binary form is the internal representation of the dataframe in the workspace, so there is no transformation needed anymore

I really thought that. However, life is about experimenting. I have a 1.22 GB file containing an igraph object. That said, I don't think what I found here is related to the object class, mainly because you can call load('file.RData') even before you call library().

The disks in this server are pretty fast, as you can check from the raw read time into memory:

user@machine data$ pv mygraph.RData > /dev/null
1.22GB 0:00:03 [ 384MB/s] [==================================>] 100%

However, when I load this data from R:

> system.time(load('mygraph.RData'))
   user  system elapsed
178.533  16.490 202.662

So it seems loading *.RData files is about 60 times slower than the disk limit, which means R must actually be doing something during load().

I've had the same feeling using different R versions with different hardware; it's just that this time I had the patience to do the benchmarking (mainly because, with such fast disk storage, it was terrible how long the load actually took).

Any ideas on how to overcome this?


UPDATE: following the ideas in the answers:

save(g,file="test.RData",compress=F) 

Now the file is 3.1 GB against 1.22 GB before. In my case, loading the uncompressed file is a bit faster (the disk is not my bottleneck, by far):

> system.time(load('test.RData'))
   user  system elapsed
126.254   2.701 128.974

Reading the uncompressed file into memory takes about 12 seconds, so I confirm most of the time is spent setting up the environment.
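For reference, a minimal sketch of how that raw-read figure can be reproduced from within R (same test.RData file as above; the chunked read is just one way to avoid holding the whole file in a single vector):

# Time a raw read of the file versus a full load(); the gap is roughly
# the unserialization / environment-rebuilding work.
raw_read <- function(path, chunk = 64 * 1024^2) {
  con <- file(path, "rb")
  on.exit(close(con))
  # read and discard 64 MB chunks until end of file
  while (length(readBin(con, what = "raw", n = chunk)) > 0) {}
}

system.time(raw_read("test.RData"))   # ~12 s in this post
system.time(load("test.RData"))       # ~129 s in this post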

I'll be back with the RDS results; that sounds interesting.


Here we are, as promised:

system.time(saveRDS(g,file="test2.RData",compress=F))
   user  system elapsed
  7.714   2.820  18.112

And I get a 3.1 GB file, just like with the uncompressed "save", although the md5sum is different, probably because save also stores the object name.

Now reading...

> system.time(a<-readRDS('test2.RData'))
   user  system elapsed
 41.902   2.166  44.077

So combining both ideas (uncompressed and RDS), it runs about 5 times faster. Thanks for your contributions!
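To summarise the whole experiment as a runnable sketch (g is the igraph object from above; file names are just examples, and timings will of course differ on other machines):

save(g,    file = "g_compressed.RData")                 # default: compressed (~1.2 GB)
save(g,    file = "g_plain.RData",  compress = FALSE)   # ~3.1 GB
saveRDS(g, file = "g_plain.rds",    compress = FALSE)   # ~3.1 GB, no object name stored

system.time(load("g_compressed.RData"))      # ~203 s in this post
system.time(load("g_plain.RData"))           # ~129 s
system.time(g2 <- readRDS("g_plain.rds"))    # ~44 s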

asked Jul 19 '12 by cyague

People also ask

What is the difference between RDA and RData?

RData files are used to store multiple R objects within a single file. Rda is a short form of RData. To save multiple objects into an RData file, we use the save() function. To save all the objects in the workspace, we use the save.image() function.
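For example (object names here are purely for illustration):

x <- 1:10
y <- data.frame(a = letters[1:3])

save(x, y, file = "some_objects.RData")   # saves the selected objects
save.image(file = "everything.RData")     # saves the whole workspace
load("some_objects.RData")                # restores x and y under their original names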

How do I load an RData file?

An RData file has the original data plus any changes that you made. The easiest way to load the data into R is to double-click on the particular file, yourfile.RData, after you download it to your computer.

Is RDS the same as RData?

Rdata is used to save multiple R objects, while Rds is used to save a single R object.

What file type is RData?

The RData format (usually with extension .rdata or .rda) is a format designed for use with R, a system for statistical computation and related graphics, for storing a complete R workspace or selected "objects" from a workspace in a form that can be loaded back by R.
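A small sketch of the practical difference between the two formats (this is also why the md5sums above differ: save stores the object name, saveRDS does not; names are illustrative):

df <- data.frame(a = 1:3)

saveRDS(df, "df.rds")           # single object, no name stored
df2 <- readRDS("df.rds")        # you pick the name when reading it back

save(df, file = "df.RData")     # objects stored together with their names
load("df.RData")                # recreates `df` in the current environment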


2 Answers

save compresses by default, so it takes extra time to uncompress the file. Then it takes a bit longer to load the larger file into memory. Your pv example is just copying the compressed data to memory, which isn't very useful to you. ;-)

UPDATE:

I tested my theory and it was incorrect (at least on my Windows XP machine with a 3.3GHz CPU and 7200RPM HDD). Loading compressed files is faster (probably because it reduces disk I/O).

The extra time is spent in RestoreToEnv (in saveload.c) and/or R_Unserialize (in serialize.c). So you could make loading faster by changing those files, or maybe by using saveRDS to save the objects in myGraph.RData individually, then somehow using readRDS across multiple R processes to load the data into shared memory...
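A rough sketch of the per-object part of that idea (the shared-memory piece is not shown, and the final object/file name is made up for illustration):

e <- new.env()
load("myGraph.RData", envir = e)        # the original workspace file

# write each object out as its own uncompressed .rds file
for (nm in ls(e)) {
  saveRDS(get(nm, envir = e), file = paste0(nm, ".rds"), compress = FALSE)
}

g <- readRDS("g.rds")                   # later: read back only what you need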

answered Sep 30 '22 by Joshua Ulrich


For variables that big, I suspect that most of the time is taken up inside the internal C code (http://svn.r-project.org/R/trunk/src/main/saveload.c). You can run some profiling to see if I'm right. (All the R code in the load function does is check that your file is non-empty and hasn't been corrupted.)
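A quick profiling sketch along those lines (note that Rprof only samples R-level functions, so time spent in the C serialization code will show up attributed to the load call itself):

Rprof("load_profile.out")
load("mygraph.RData")
Rprof(NULL)
summaryRprof("load_profile.out")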

As well as reading the variables into memory, they (amongst other things) need to be stored inside an R environment.

The only obvious way of getting a big speedup in loading variables would be to rewrite the code in a parallel way to allow simultaneous loading of variables. This presumably requires a substantial rewrite of R's internals, so don't hold your breath for such a feature.

answered Sep 30 '22 by Richie Cotton