I've noticed that the RSS (Resident Set Size) of my Node.js app is growing over time, and given that I'm getting a "JS Object Allocation Failed - Out of Memory" error on my server, that growth seems a likely cause.
I set up the following very simple Node app:
var express = require('express');
var app = express();

// Respond with the process's current memory usage as JSON.
app.get('/', function (req, res) {
  res.end(JSON.stringify(process.memoryUsage()));
});

app.listen(8888);
By simply holding down the "refresh" hotkey at http://localhost:8888/ I can watch the RSS/heap/etc. grow until RSS gets well above 50 MB (before I get bored). If I wait a few minutes and come back, the RSS drops, presumably because the GC has run.
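The same growth can be reproduced without holding refresh down. Here is a minimal sketch that polls the endpoint and logs the RSS it reports; it assumes the app above is listening on localhost:8888, and the interval and request count are arbitrary:

// poll.js - hypothetical load/monitor script for the server above.
var http = require('http');

var requests = 0;

var timer = setInterval(function () {
  http.get('http://localhost:8888/', function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var usage = JSON.parse(body);
      // Log RSS in MB so the upward trend (and the later drop) is easy to see.
      console.log('rss: ' + (usage.rss / 1024 / 1024).toFixed(1) + ' MB');
    });
  });

  if (++requests >= 1000) clearInterval(timer);
}, 10);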
I'm trying to figure out if this explains why my actual Node app is crashing. My production app quickly hits about 100 MB RSS; when it crashes it is generally between 200 MB and 300 MB. As best I can tell, that should not be too big (Node should be able to handle 1.7 GB or so, I believe), but nonetheless I'm concerned by the fact that the RSS size on my production server trends upwards (falloffs represent crashes):
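For what it's worth, the heap limit a process is actually running with can be inspected with the built-in v8 module, and raised with the standard --max-old-space-size flag. A minimal sketch (the 2048 value is just an example, and the exact default varies by Node version):

// check-heap-limit.js - print the V8 heap limit for this process.
var v8 = require('v8');

var limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log('heap_size_limit: ' + limitMb.toFixed(0) + ' MB');

// To raise the limit, start the process with e.g.:
//   node --max-old-space-size=2048 app.js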
RSS, or resident set size, refers to the amount of main memory occupied by the process, which includes the code segment, heap, and stack.
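In Node, process.memoryUsage() exposes this breakdown: heapTotal and heapUsed cover only the V8 JavaScript heap, while rss covers everything the operating system has mapped for the process. A quick way to see the distinction:

// memory-breakdown.js - print the individual fields of process.memoryUsage().
var usage = process.memoryUsage();

console.log('rss:       ' + usage.rss       + ' bytes'); // total memory resident for the process
console.log('heapTotal: ' + usage.heapTotal + ' bytes'); // memory V8 has reserved for the JS heap
console.log('heapUsed:  ' + usage.heapUsed  + ' bytes'); // memory actually used by JS objects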
This question is quite old and still has no answer, so I'll throw in mine, which draws on a 2013-2014 blog post by Jay Conrod, who has "worked on optimizing the V8 JavaScript engine for mobile phones".
V8 tries to be efficient when collecting garbage, and to that end it uses incremental marking and lazy sweeping.
Basically incremental marking is responsible for tracking whether your objects can be collected.
Incremental marking begins when the heap reaches a certain threshold size.
Lazy sweeping is responsible for collecting the objects marked as garbage during incremental marking and for performing other time-consuming tasks.
Once incremental marking is complete, lazy sweeping begins. All objects have been marked live or dead, and the heap knows exactly how much memory could be freed by sweeping. All this memory doesn't necessarily have to be freed up right away, though, and delaying the sweeping won't really hurt anything. So rather than sweeping all pages at the same time, the garbage collector sweeps pages on an as-needed basis until all pages have been swept. At that point, the garbage collection cycle is complete, and incremental marking is free to start again.
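If you want to watch these phases happen in your own process, V8 can log its collections. A minimal sketch, assuming the standard --trace-gc and --expose-gc flags (the allocation loop is just an arbitrary way to generate garbage):

// gc-demo.js - run with:  node --trace-gc --expose-gc gc-demo.js
// --trace-gc prints a line for each scavenge / mark-sweep cycle,
// --expose-gc makes global.gc() available so a full collection can be forced.
var junk = [];

// Allocate a pile of short-lived objects to push the heap past the
// incremental marking threshold.
for (var i = 0; i < 1000000; i++) {
  junk.push({ index: i, payload: 'short-lived-string-' + i });
}

junk = null;       // drop all references so the objects become garbage
if (global.gc) {
  global.gc();     // force a full collection instead of waiting for lazy sweeping
}
console.log(process.memoryUsage());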
I think this explains why your server allocates so much memory until it reaches a certain cap. For a better understanding I recommend reading Jay Conrod's blog post "A tour of V8: Garbage Collection".