How to avoid fast memory increase during scavenge gc?

I have an application built on restify. There are no memory leaks, but memory grows rapidly between scavenge GCs until a heavyweight mark-sweep GC kicks in and reclaims it.

This affects the performance of my application.

[2268]   266859 ms: Scavenge 61.5 (119.5) -> 46.0 (119.5) MB, 2.2 ms [allocation failure].
[2268]   267084 ms: Scavenge 63.7 (119.5) -> 48.3 (119.5) MB, 6.2 ms [allocation failure].
[2268]   267289 ms: Scavenge 66.0 (119.5) -> 50.6 (119.5) MB, 2.6 ms [allocation failure].
[2268]   267504 ms: Scavenge 68.3 (119.5) -> 52.8 (119.5) MB, 2.4 ms [allocation failure].
[2268]   267700 ms: Scavenge 70.5 (119.5) -> 55.1 (119.5) MB, 2.7 ms [allocation failure].
....

[2268]   275913 ms: Scavenge 154.2 (183.5) -> 138.8 (183.5) MB, 2.4 ms [allocation failure].
[2268]   276161 ms: Scavenge 157.5 (185.5) -> 142.1 (185.5) MB, 2.7 ms (+ 2.4 ms in 18 steps since last GC) [allocation failure].
[2268]   276427 ms: Scavenge 159.8 (187.5) -> 144.3 (187.5) MB, 2.5 ms (+ 36.3 ms in 236 steps since last GC) [allocation failure].
[2268]   276494 ms: Mark-sweep 147.7 (188.5) -> 43.7 (121.5) MB, 20.1 ms (+ 45.1 ms in 298 steps since start of marking, biggest step 0.5 ms) [GC interrupt] [GC in old space requested].
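Traces in this format are what V8 prints when Node is started with the --trace_gc flag (app.js below is just a placeholder for the entry script):

node --trace_gc app.js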

This type of behavior happens when I try to access a non-existent URL:

ab -c 100 -n 10000000 -k http://localhost:1337/invalid/url

I can't really use node-inspector to track down what causes such intense memory growth, because it forces a full GC before taking a heap snapshot.

What are my options for tracking down what causes such rapid memory growth?

How can I find out which objects survive scavenges but don't survive the mark-sweep GC?
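One low-overhead option is to sample heap statistics over time and correlate the growth with incoming traffic. A minimal sketch, assuming plain Node with no extra tooling (the one-second interval and MB conversion are arbitrary):

// Log heap usage once per second so the sawtooth growth between
// scavenges can be correlated with request bursts.
var v8 = require('v8');

setInterval(function () {
  var mem = process.memoryUsage();
  var heap = v8.getHeapStatistics();
  console.log(
    'heapUsed=%d MB heapTotal=%d MB rss=%d MB',
    Math.round(mem.heapUsed / 1048576),
    Math.round(heap.total_heap_size / 1048576),
    Math.round(mem.rss / 1048576)
  );
}, 1000);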

Thanks,

UPDATE 1: So there is no way to inspect the intermediate content that survives scavenges. Here is a hint: if you see fast memory growth during scavenges that then drops suddenly after a mark-sweep, it means your code is allocating data in large object space. Long stack traces, for example: restify generates gigantic stack traces, which should be disabled in production.
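As an illustration of that last point, a sketch only: Error.stackTraceLimit is a standard V8 setting that caps how many frames are captured per Error, and some long-stack-trace helpers raise it to Infinity; the restify-specific switch is not shown here.

if (process.env.NODE_ENV === 'production') {
  // Cap the number of stack frames V8 captures for each Error. Very deep
  // (async/long) stack traces can keep a lot of data alive per request.
  Error.stackTraceLimit = 10; // V8's default; lower it further if needed
}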

Asked Aug 28 '15 by Vlad Miller
People also ask

What causes memory leaks in node?

Closures, timers, and event handlers are common sources of memory leaks in Node when they are not released properly: a closure or listener that stays reachable keeps everything it references alive.
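As a generic illustration (not taken from the original question; the array is just a stand-in for any large object):

function start() {
  var big = new Array(1e6).fill('*'); // large object captured by the closure
  var timer = setInterval(function () {
    // The callback closes over `big`, so `big` cannot be collected
    // while this interval is still registered.
    console.log('still holding', big.length, 'items');
  }, 1000);
  return function stop() {
    clearInterval(timer); // releasing the timer lets `big` be collected
  };
}

var stop = start();
// ... later, when the periodic work is no longer needed:
stop();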

What is allocation failure in GC?

In V8's GC trace, "allocation failure" simply means an allocation could not be satisfied in the current space, so a collection was triggered; for new space this results in a scavenge. It is not an error in itself, but frequent collections can slow the application down.

How do I increase my node memory limit?

If you want to increase the maximum heap size for Node, use the --max-old-space-size option (value in MB). You can pass it directly on the command line or through the NODE_OPTIONS environment variable.
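For example (4096 MB is an arbitrary value chosen for illustration, and app.js is a placeholder):

node --max-old-space-size=4096 app.js
NODE_OPTIONS=--max-old-space-size=4096 node app.js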

What is scavenge GC?

In V8, a scavenge is the young-generation collection. It runs when new space (where fresh allocations land) fills up. During a scavenge, reachable objects are copied into the other half of new space, and objects that have survived enough scavenges are promoted to old space, which is later cleaned by the much more expensive mark-sweep collector.


1 Answer

You might try running your Node script with the --expose-gc option:

node --expose-gc script.js

This allows you to trigger garbage collection manually from within JavaScript:

global.gc();

With garbage collection under manual control, you can apply a multiple-snapshot technique:

  • take one snapshot before and one snapshot after a forced GC
  • apply your optimization
  • take another snapshot before and after a forced GC

Comparing the snapshots lets you track down what causes the memory growth. The goal is for the second "snapshot after GC" to look better than the first "snapshot after GC".
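A minimal sketch of that workflow, assuming the process was started with --expose-gc and the third-party heapdump module is installed (npm install heapdump); the file names are arbitrary:

var heapdump = require('heapdump');

function snapshotAroundGc(label) {
  // Write a snapshot, force a full GC, then write another one so the
  // before/after pair can be compared in Chrome DevTools.
  heapdump.writeSnapshot('./' + label + '-before-gc.heapsnapshot');
  global.gc();
  heapdump.writeSnapshot('./' + label + '-after-gc.heapsnapshot');
}

snapshotAroundGc('baseline');   // before applying the optimization
// ... apply the optimization, generate some load ...
snapshotAroundGc('optimized');  // after applying the optimization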

Answered Nov 12 '22 by Jens A. Koch