
Linux core dumps are too large!

Tags:

linux

coredump

Recently I've been noticing an increase in the size of the core dumps generated by my application. Initially they were only around 5 MB and contained around 5 stack frames; now I have core dumps of over 2 GB, and the information contained in them is no different from the smaller dumps.

Is there any way I can control the size of the core dumps generated? Shouldn't they at least be smaller than the application binary itself?

Binaries are compiled in this way:

  • Compiled in release mode with debug symbols (i.e., the -g compiler option in GCC).
  • Debug symbols are copied into a separate file and stripped from the binary.
  • A GNU debug link is added to the binary.

At the beginning of the application, there's a call to setrlimit which sets the core limit to infinity. Is this the problem?

asked May 04 '10 by themoondothshine

People also ask

Can I delete core dump files in Linux?

Yes, you can safely delete the core files.

Should I disable core dumps?

Core dumps are also known as memory dumps, crash dumps, system dumps, or ABEND dumps. However, they may contain sensitive information such as passwords, user data (PAN, SSN), or encryption keys. Hence, they should be disabled on production Linux servers.

Which Linux command helps you to set the maximum size of core dumps created?

ulimit, a shell builtin available in most Linux distributions, allows you to set resource limits (including the maximum core file size) for the shell and all of its subprocesses.


2 Answers

Yes - don't allocate so much memory :-)

The core dump contains the full image of your application's address space, including code, stack, and heap (malloc'd objects, etc.).

If your core dumps are >2GB, that implies that at some point you allocated that much memory.

You can use setrlimit to set a lower limit on core dump size, at the risk of ending up with a core dump that you can't decode (because it's incomplete).

answered Nov 05 '22 by David Gelhar


Yes, setrlimit is why you're getting large core files. You can set the limit on the core size in most shells, e.g. in bash you can do ulimit -c 5000000. Your setrlimit call will override that, however.

/etc/security/limits.conf can also be used to set an upper bound on the core size.
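A sketch of the corresponding limits.conf entries (values are in kilobytes; the 1 GB cap is an arbitrary example):

```
# /etc/security/limits.conf -- cap core files at 1 GB for all users
*    soft    core    1048576
*    hard    core    1048576
```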

answered Nov 05 '22 by Chris AtLee