Periodically I program sloppily. OK, I program sloppily all the time, but sometimes that catches up with me in the form of out-of-memory errors. I start exercising a little discipline in deleting objects with the rm() command, and things get better. I see mixed messages online about whether I should explicitly call gc() after deleting large data objects. Some say that before R returns a memory error it will run gc() itself, while others say that manually forcing gc() is a good idea.
Should I run gc() after deleting large objects in order to ensure maximum memory availability?
"Probably." I do it too, and often even in a loop, as in
cleanMem <- function(n = 10) { for (i in 1:n) gc() }
Yet that does not, in my experience, restore memory to a pristine state.
So what I usually do is to keep the tasks at hand in script files and execute those using the 'r' frontend (on Unix, and from the 'littler' package). Rscript is an alternative on that other OS.
That workflow happens to agree with one we covered here before.
From the help page on gc():
A call of 'gc' causes a garbage collection to take place. This will also take place automatically without user intervention, and the primary purpose of calling 'gc' is for the report on memory usage.
However, it can be useful to call 'gc' after a large object has been removed, as this may prompt R to return memory to the operating system.
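The pattern the help page describes might look like this (the object name and size are illustrative):

```r
x <- rnorm(1e7)  # roughly 76 MB of doubles
rm(x)            # the object is gone, but R may still hold on to the pages
gc()             # prints the Ncells/Vcells usage table and may hand memory back to the OS
```
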
So it can be useful to do, but mostly you shouldn't have to. My personal opinion is that it is code of last resort: you shouldn't be littering your code with gc() statements as a matter of course, but if your machine keeps falling over and you've tried everything else, then it might be helpful.
By everything else, I mean things like
Writing functions rather than raw scripts, so variables go out of scope.
Emptying your workspace if you go from one problem to another unrelated one.
Discarding data/variables that you aren't interested in. (I frequently receive spreadsheets with dozens of uninteresting columns.)
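A minimal sketch of the first point, with made-up names: a temporary created inside a function becomes eligible for collection as soon as the function returns, with no rm() or gc() needed.

```r
summarise_big <- function(n) {
  big <- rnorm(n)   # 'big' exists only for the duration of this call
  mean(big)         # only the scalar summary escapes
}                   # 'big' goes out of scope here

m <- summarise_big(1e6)  # the million-element vector is now collectable
```
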
Supposedly R uses only RAM. That's just not true on a Mac (and I suspect it's not true on Windows either). If it runs out of RAM, it will start using virtual memory. Sometimes, but not always, a process will 'recognize' that it needs to run gc() and free up memory. When it does not, you can see this in Activity Monitor.app: all the RAM is occupied and disk access has jumped up. I find that when I am doing large Cox regression runs, I can avoid spilling over into virtual memory (with its slow disk access) by preceding the calls with gc(); cph(...)
A bit late to the party, but:
Explicitly calling gc() will free some memory "now", so if other processes need the memory, it might be a good idea: for example, before calling system() or similar, or when you're "done" with the script and R will sit idle for a while until the next job arrives; again, so that other processes get more memory.
If you just want your script to run faster, it won't matter since R will call it later if it needs to. It might even be slower since the normal GC cycle might never have needed to call it.
...but if you want to measure time, for instance, it is typically a good idea to do a GC before running your test. This is what system.time() does by default.
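You can see that in system.time()'s gcFirst argument, which defaults to TRUE so that a collection triggered mid-measurement doesn't skew the timing:

```r
system.time(sum(rnorm(1e6)))                   # gc() runs before the expression is timed
system.time(sum(rnorm(1e6)), gcFirst = FALSE)  # skip the preliminary collection
```
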
UPDATE: As @DWin points out, R (or C#, or Java, etc.) doesn't always know when memory is low and the GC needs to run. So you could sometimes need to do GC as a workaround for deficiencies in the memory system.
No. If there is not enough memory available for an operation, R will run gc() automatically.
"Maybe." I don't really have a definitive answer. But the help file suggests that there are really only two reasons to call gc(): to get the report on memory usage, and, after a large object has been removed, to prompt R to return memory to the operating system.
Since it can slow down a large simulation with repeated calls, I have tended to only do it after removing something large. In other words, I don't think that it makes sense to systematically call it all the time unless you have good reason to.