How do I keep Perl from consuming tons of memory when child forks of a large parent process shut down?

Context:

I have a multi-forking Perl (5.16) process that runs on Linux. The parent process loads a very large amount of Perl code (via use/require) and allocates lots of data structures (several GB). It then creates many child forks, all of which work in parallel. This is done to reduce the memory footprint of the process while it runs, since the copy-on-write nature of fork() means that the children can use the parent's data without each maintaining its own large memory image.
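For reference, the pattern looks roughly like this. This is a minimal sketch, not my actual code: load_big_data(), handle_jobs(), and the worker count are placeholder stand-ins.

    use strict;
    use warnings;

    # Placeholder stand-ins for the real code: the actual process loads
    # several GB of modules and data here.
    sub load_big_data { return { rows => [ 1 .. 1000 ] } }
    sub handle_jobs   { my ($data) = @_; sleep 1 }

    my $num_workers = 8;
    my $big_data    = load_big_data();    # allocated once, in the parent

    my @child_pids;
    for (1 .. $num_workers) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {
            # Child: sees $big_data through copy-on-write pages; no private
            # copy is made until a page is actually written to.
            handle_jobs($big_data);
            exit 0;
        }
        push @child_pids, $pid;
    }
    waitpid($_, 0) for @child_pids;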

Problem:

All of that works fine until I try to shut down the group of processes. When I interrupt the parent (the signal propagates to all of the children), the memory on the server running the code immediately fills up, the server starts swapping, and other processes on it grind to a halt. When a copy-on-write child shuts down, Perl appears to walk and deallocate every variable inherited from the parent, which touches (and therefore copies) all of those shared pages.

Question:

How do I prevent this bloat-on-shutdown from happening? Is there some way I can tell the child forks to only try to traverse-and-reclaim memory that those forks allocated?

asked by Zac B

2 Answers

The allocation of memory pages comes from the deallocation of variables on exit: Perl walks and frees every variable, which touches (and therefore copies) the copy-on-write pages those variables live in. This per-variable cleanup is necessary to have destructors called.

Calling POSIX::_exit() exits immediately, skipping the per-variable deallocation, but it also skips destructor calls.
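A minimal sketch of what that looks like in a child, with do_child_work() as a hypothetical stand-in for the real work:

    use strict;
    use warnings;
    use POSIX ();

    # Hypothetical stand-in for the child's real work.
    sub do_child_work { sleep 1 }

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        do_child_work();

        # Bypass Perl's global destruction: the child never walks (and so
        # never copies) the pages it shares with the parent.
        # Trade-off: END blocks and object destructors do NOT run.
        POSIX::_exit(0);
    }

    waitpid($pid, 0);    # parent reaps the child as usual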

answered by ikegami


I accepted @ikegami's answer, because it directly answers the question.

I'm posting this because my "solution" (really a way to optimize part of the problem away) might be useful to others.

The eventual fix in my case was a paradigm shift: I realized that the problem wasn't that any single Perl process uses a lot of memory on fork shutdown, but that so many Perl processes were doing it at the same time.

When my parent process got the "shutdown" instruction, it immediately sent a "shutdown" message to all of its children, and they all finished up what they were doing and shut down at more or less the same time. With anywhere from dozens to hundreds of child processes shutting down at the same time, the memory overhead was too great.

The fix was to make shutdown a two-phase process: first, the parent process sent a "stop what you're doing" message to all of its children so that business logic stopped running at a predictable time. It sent that message to all of the children at once, in a very quick loop. Then, it shut down the children one at a time: it issued an interrupt to each child, called waitpid on it until it finished, and then went on to the next one.
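A sketch of that two-phase shutdown, assuming the parent kept the child PIDs from fork time; SIGUSR1 is a stand-in here for whatever "stop working" message the real code uses:

    # Two-phase shutdown, given the child PIDs collected at fork time.
    sub shutdown_children {
        my (@child_pids) = @_;

        # Phase 1: tell every child to stop its business logic, all at once.
        # (SIGUSR1 stands in for the real "stop what you're doing" message.)
        kill 'USR1', @child_pids;

        # Phase 2: shut the children down one at a time, so at most one
        # child is paying the exit-time copy-on-write cost at any moment.
        for my $pid (@child_pids) {
            kill 'INT', $pid;    # interrupt this child
            waitpid($pid, 0);    # block until it has fully exited
        }
    }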

This way, the worst-case shutdown-induced memory footprint (with p representing the pre-fork memory footprint and f representing the number of child forks) was 2p, rather than fp.

This solution will not work in cases where 2p memory consumption is still an unacceptably high cost.

Two optimizations were added (sketched below): timeout/forceful-kill checks for stubborn or broken children, and conditional sleeps between child shutdowns if the previous child's shutdown forced the system to start swapping. The sleeps gave the system time to breathe and get pages back out of swap.
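A rough sketch of those two optimizations, assuming a per-child grace period and a crude /proc/vmstat-based swap check (the real code's check may well differ):

    use strict;
    use warnings;
    use POSIX ':sys_wait_h';    # WNOHANG

    # Wait up to $grace seconds for $pid to exit, then force-kill it.
    sub reap_with_timeout {
        my ($pid, $grace) = @_;
        for (1 .. $grace) {
            return if waitpid($pid, WNOHANG) == $pid;    # child is gone
            sleep 1;
        }
        kill 'KILL', $pid;    # stubborn or broken child: force it down
        waitpid($pid, 0);
    }

    # Crude swap-pressure check: has the kernel's cumulative count of
    # pages swapped out (pswpout in /proc/vmstat) grown since last call?
    my $last_pswpout = 0;
    sub system_is_swapping {
        open my $fh, '<', '/proc/vmstat' or return 0;
        my ($pswpout) = map { /^pswpout\s+(\d+)/ ? $1 : () } <$fh>;
        return 0 unless defined $pswpout;
        my $swapping = $pswpout > $last_pswpout;
        $last_pswpout = $pswpout;
        return $swapping;
    }

With helpers like these, the shutdown loop can call reap_with_timeout($pid, 30) for each child and then do something like sleep 5 if system_is_swapping(); before interrupting the next one.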

Again, this is an optimization on the problem, not an answer. The real answer is @ikegami's.

answered by Zac B