Recently I've been noticing an increase in the size of the core dumps generated by my application. Initially they were only around 5 MB and contained around 5 stack frames; now I have core dumps of more than 2 GB, and the information contained within them is no different from the smaller dumps.
Is there any way I can control the size of core dumps generated? Shouldn't they be at least smaller than the application binary itself?
Binaries are compiled in this way:
At the beginning of the application, there's a call to setrlimit
which sets the core limit to infinity -- Is this the problem?
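For reference, a call like the one described, assuming it raises RLIMIT_CORE to RLIM_INFINITY (the application's actual code isn't shown here), typically looks something like this minimal C sketch:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* Sketch of the kind of call described above: remove the cap on
     * the size of core files this process may produce. */
    struct rlimit rl;
    rl.rlim_cur = RLIM_INFINITY;  /* soft limit */
    rl.rlim_max = RLIM_INFINITY;  /* hard limit */

    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit(RLIMIT_CORE)");
        return 1;
    }

    /* ... rest of the application ... */
    return 0;
}
```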
Yes, you can safely delete core files once you no longer need them for debugging. They are also known as a memory dump, crash dump, system dump, or ABEND dump.
However, core dumps may contain sensitive information, for example passwords, user data such as PANs or SSNs, and encryption keys, which is why they are often disabled on production Linux servers.
Using ulimit to set core file sizes: ulimit is a shell builtin, available in most shells on Linux, that lets you set resource limits, including the maximum size of core files, for the shell and all of its child processes.
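As a quick illustration (a bash sketch; the size below is arbitrary, and the unit ulimit uses is shell-dependent):

```bash
# Show the current core-file size limit for this shell
ulimit -c

# Disable core dumps for this shell and its child processes
ulimit -c 0

# Or allow core files only up to a bounded size (bash interprets this
# as 1024-byte blocks by default)
ulimit -c 10240
```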
Yes - don't allocate so much memory :-)
The core dump contains the full image of your application's address space, including code, stack and heap (malloc'd objects etc.)
If your core dumps are >2GB, that implies that at some point you allocated that much memory.
You can use setrlimit to set a lower limit on core dump size, at the risk of ending up with a core dump that you can't decode (because it's incomplete).
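A minimal sketch of capping the core size programmatically (the 16 MB figure is arbitrary; RLIMIT_CORE is specified in bytes):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* Lower only the soft limit on core size, keeping whatever hard
     * limit is already in place so the cap can still be raised later.
     * As noted above, a truncated core may not be fully usable. */
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit(RLIMIT_CORE)");
        return 1;
    }

    rl.rlim_cur = 16UL * 1024 * 1024;  /* in bytes */
    if (rl.rlim_max != RLIM_INFINITY && rl.rlim_cur > rl.rlim_max)
        rl.rlim_cur = rl.rlim_max;     /* soft limit may not exceed hard limit */

    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit(RLIMIT_CORE)");
        return 1;
    }

    /* ... rest of the application ... */
    return 0;
}
```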
Yes, setrlimit is why you're getting large core files. You can set the limit on the core size in most shells, e.g. in bash you can do ulimit -c 5000000. Your setrlimit call will override that, however.
/etc/security/limits.conf can be used to set upper bounds on the core size as well.
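For example, entries along these lines (the user name and values are purely illustrative; the core value is in KB) cap core sizes system-wide or per user:

```
# /etc/security/limits.conf -- illustrative entries
# <domain>  <type>  <item>  <value>   (core size is in KB)
*           soft    core    0
appuser     hard    core    102400
```

These limits are applied by pam_limits at login, so they affect new sessions rather than processes that are already running.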