tl;dr: How to dump a Perl stack trace when a mod_perl httpd process runs out of memory.
We've got a mod_perl 2 server, Perl 5.8.8, RHEL 5.6, Linux 2.6.18.
Very occasionally and unpredictably, a child httpd process starts using up all available memory at an alarming rate. As a stopgap, we've used BSD::Resource::setrlimit(RLIMIT_VMEM, ...) so that the runaway process dies with "Out of memory" before it brings down the whole server.
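For reference, here's a minimal sketch of that stopgap, assuming BSD::Resource is installed. The 512 MB cap and the idea of running it once per child (e.g. from a PerlChildInitHandler) are illustrative choices, not details from the question; on Linux the address-space limit is RLIMIT_AS (RLIMIT_VMEM is the equivalent name on systems that define it):

```perl
use strict;
use warnings;
use BSD::Resource;   # setrlimit, RLIMIT_AS

# Cap each httpd child's address space so a runaway request dies with
# "Out of memory" instead of exhausting the whole machine.  Run this
# once per child, e.g. from a PerlChildInitHandler.
my $cap = 512 * 1024 * 1024;            # 512 MB -- example value only
setrlimit(RLIMIT_AS, $cap, $cap)
    or warn "setrlimit(RLIMIT_AS) failed: $!";
```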
We don't know where in the code this is happening, and it's difficult to reproduce without hours of load testing.
What we'd really like is a way to get a Perl stack trace just before the process runs out of memory, so we know what code is causing this. Unfortunately, "Out of memory" is an untrappable error.
Here are the options I'm considering, each with its drawbacks:
1) Use the $^M emergency memory pool. Requires us to recompile perl with -DPERL_EMERGENCY_SBRK and -Dusemymalloc.
2) Put in tons of log statements, then analyze the logs to see where the process is stopping short.
3) Write an external script that constantly scans the pool of httpd processes and, if it sees one using a lot of memory, sends it a USR2 signal, which we've arranged to dump a stack trace (see the sketch after this list).
4) Somehow have the process monitor its own memory continuously, and dump a stack trace when memory gets high but before the "Out of memory" error (also covered in the sketch below).
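Here is a rough sketch of how options 3 and 4 might be wired up inside the mod_perl children, assuming BSD::Resource and Carp are available. The subroutine names, the use of STDERR (which goes to the Apache error log), and the 90%-of-limit threshold are my assumptions, not details from the post:

```perl
use strict;
use warnings;
use Carp ();
use BSD::Resource;   # getrlimit, RLIMIT_AS, RLIM_INFINITY

# Write a full Perl stack trace to STDERR, which ends up in the
# Apache error log.
sub dump_stack {
    my ($why) = @_;
    print STDERR "[pid $$] stack trace ($why):\n" . Carp::longmess('');
}

# Option 3: an external watchdog script can send USR2 to a bloated child.
$SIG{USR2} = sub { dump_stack('SIGUSR2') };

# Current virtual size of this process in bytes (Linux-specific).
sub current_vsize {
    open my $fh, '<', '/proc/self/status' or return 0;
    while (my $line = <$fh>) {
        return $1 * 1024 if $line =~ /^VmSize:\s+(\d+)\s+kB/;
    }
    return 0;
}

# Option 4: call this after each request (e.g. from a cleanup handler)
# and dump a trace once the process nears its rlimit.
sub check_memory {
    my ($soft) = getrlimit(RLIMIT_AS);
    return if !defined $soft || $soft <= 0 || $soft == RLIM_INFINITY;
    dump_stack('memory threshold') if current_vsize() > 0.9 * $soft;
}
```

Note that a per-request check only fires between requests, so the USR2 watchdog is still the better bet for a single request that blows through the limit in one go.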
Thanks!
Jon
You can get a backtrace with mod_backtrace; see Andy Millar's introduction. The backtrace is at the C level, though, so you either need to map the C frames back to your Perl code yourself, or combine it with a Perl-level trace (for example, a signal handler that logs Carp::longmess output).