
Does the size of the core file reflect the memory usage when the application crashed?

My application (C++ on Solaris 10, 32-bit) crashed, and the size of the core file it generated is 4 GB. Can I assume the application may have been using up to 4 GB of memory (the same as the size of the core file) when it was about to crash? PS: My application is standalone and doesn't depend on any other processes.

Is there any way to check the total memory the application used, with the core file?

asked Feb 13 '13 by tune

People also ask

What does core dump do?

A core dump is the copying of the contents of random access memory (RAM) at one moment in time to a more permanent medium, such as a hard disk. One can think of it as a full-length "snapshot" of RAM. A core dump is taken mainly for the purpose of debugging a program.

Where do core dump files go?

By default, core dumps are sent to systemd-coredump, which can be configured in /etc/systemd/coredump.conf. By default, all core dumps are stored in /var/lib/systemd/coredump (due to Storage=external) and they are compressed with zstd (due to Compress=yes).

What is core dump in Linux?

A core dump is a file that gets automatically generated by the Linux kernel after a program crashes. This file contains the memory, register values, and the call stack of an application at the point of crashing.
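To make this concrete, here is a minimal sketch of a program that crashes on purpose so that the kernel writes such a core file. It assumes the core-size limit of the invoking shell (e.g. ulimit -c) is not zero and that core dumps are not redirected elsewhere:

    // Minimal sketch: deliberately abort so the kernel writes a core file.
    // Assumes the core-size resource limit (ulimit -c) is not zero.
    #include <cstdlib>
    #include <vector>

    int main() {
        // Allocate some memory so the dumped image has something in it.
        std::vector<char> buffer(64 * 1024 * 1024, 'x');

        // SIGABRT's default action terminates the process with a core dump.
        std::abort();
    }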


2 Answers

Yes, the core file represents a dump of the whole virtual memory area used by the process when the crash happened. You can't have more than a 4 GB core file with a 32-bit process.

Under Solaris, you can use several commands located in /usr/proc/bin to get information from the core file. In particular:

  • file core: will confirm the core file is from your process
  • pstack core: will tell you where the process crashed
  • pmap core: will show you memory usage per address
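If you can re-run the application, you can also sample its memory usage at runtime through the Solaris structured /proc interface, as a complement to running pmap on the core afterwards. This is a hedged, Solaris-specific sketch, not something from the answer above; the field names and units (Kbytes) are per proc(4):

    // Hedged Solaris-specific sketch: read this process's psinfo record to
    // log the image size and resident set size at runtime.
    #include <procfs.h>    // psinfo_t (Solaris structured /proc)
    #include <unistd.h>    // getpid
    #include <cstdio>

    int main() {
        char path[64];
        std::snprintf(path, sizeof path, "/proc/%d/psinfo", (int)getpid());

        std::FILE* f = std::fopen(path, "r");
        if (f == NULL) {
            std::perror("fopen");
            return 1;
        }

        psinfo_t info;
        if (std::fread(&info, sizeof info, 1, f) == 1) {
            // pr_size: size of the process image; pr_rssize: resident set
            // size; both reported in Kbytes by the kernel.
            std::printf("image size: %lu KB, resident set: %lu KB\n",
                        (unsigned long)info.pr_size,
                        (unsigned long)info.pr_rssize);
        }
        std::fclose(f);
        return 0;
    }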

You can limit the set of data saved in a core file, among other things, by using the coreadm command. By default everything is saved:
stack + heap + shm + ism + dism + text + data + rodata + anon + shanon + ctf
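One related point, added here as a hedged aside because coreadm only controls the content of the dump: the core file can also be truncated by the process's RLIMIT_CORE resource limit, in which case its size would understate the memory actually in use. A minimal POSIX sketch (valid on both Solaris and Linux) that checks the limit and raises the soft limit to the hard limit at startup:

    // Minimal POSIX sketch: make sure the core-size limit does not truncate
    // the dump by raising the soft limit as far as the hard limit allows.
    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        struct rlimit rl;
        if (getrlimit(RLIMIT_CORE, &rl) != 0) {
            std::perror("getrlimit");
            return 1;
        }
        std::printf("core limit: soft=%llu hard=%llu\n",
                    (unsigned long long)rl.rlim_cur,
                    (unsigned long long)rl.rlim_max);

        // Raise the soft limit to the hard limit so a full core can be written.
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            std::perror("setrlimit");
            return 1;
        }
        return 0;
    }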

answered Nov 14 '22 by jlliagre

From the manpage (http://linux.die.net/man/5/core):

The default action of certain signals is to cause a process to terminate and produce a core dump file, a disk file containing an image of the process's memory at the time of termination.

Possibly your code uses multiple threads and shared data; thread stacks and shared mappings are part of the process image, so they are included in the dump and can make the core larger than you might expect.

Also:

Since kernel 2.6.23, the Linux-specific /proc/PID/coredump_filter file can be used to control which memory segments are written to the core dump file in the event that a core dump is performed for the process with the corresponding process ID.

Possibly through this you can control which memory segments end up in the dump, and so get a better idea of the memory the application actually used.
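As an illustration of that bitmask (bit values per core(5)): a process can change its own filter by writing a hex mask to the file before it crashes. A minimal sketch for Linux 2.6.23 or later; 0x7f is just an example mask that enables the anonymous, file-backed and huge-page mapping bits:

    // Minimal sketch (Linux >= 2.6.23): read and then widen this process's
    // coredump_filter so more mapping types are included in a future core dump.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        const char* path = "/proc/self/coredump_filter";

        // Show the current mask (printed by the kernel as a hex string).
        std::ifstream in(path);
        std::string current;
        if (in >> current) {
            std::cout << "current coredump_filter: 0x" << current << '\n';
        }
        in.close();

        // 0x7f = bits 0-6: anonymous and file-backed private/shared mappings,
        // ELF headers, and private/shared huge pages (see core(5)).
        std::ofstream out(path);
        if (!out) {
            std::cerr << "cannot open " << path << '\n';
            return 1;
        }
        out << "0x7f" << std::endl;
        return out.good() ? 0 : 1;
    }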

answered Nov 14 '22 by Anshul