My overall goal is to figure out, from a post-mortem core file, why a specific process is consuming a lot of memory. Is there a summary I can get somehow? Obviously valgrind is out of the question, because I can't get access to the process while it is live.
First of all, getting output similar to /proc/<pid>/maps would help, but
maintenance info sections
(as described in "GDB: Listing all mapped memory regions for a crashed process") in gdb didn't show me the heap memory consumption.
info proc map
is an option, as I can get access to a machine with the exact same code, but as far as I can tell its output is not correct: my process was using about 700 MB, yet the maps shown only accounted for some 10 MB. I also didn't see the .so files there, which are visible in
maintenance print statistics
Do you know any other command which might be useful?
I can always instrument the code, but that's not easy. Besides, trying to reach all the allocated data through pointers is like looking for a needle in a haystack.
Do you have any ideas?
Run your program under GDB and Valgrind. Set a breakpoint where you think the memory is lost, continue to it, and run a leak check. If there is no leak yet, slowly proceed forward, doing a leak check after each step or at every new breakpoint.
Having said that, you can get an idea by looking at the size of the core dump. You can generate multiple dumps, say one after an initial run and one after a prolonged run; if you see a vast difference in size, you can guess that something is going wrong. But again, the memory could be in use for perfectly legitimate purposes.
You run memleax to monitor the target process, wait for the real-time memory leak report, and then kill it (e.g. with Ctrl-C) to stop monitoring. memleax follows new threads, but not forked processes. If you want to debug multiple processes, just run multiple memleax instances.
Postmortem debugging of this sort in gdb is more of an art than a science.
The most important tool for it, in my opinion, is the ability to write scripts that run inside of gdb. The manual will explain it to you. The reason I find this so useful is that it lets you do things like walk data structures and print out information about them.
Another possibility for you here is to instrument your version of malloc -- write a new malloc function that saves statistics about what is being allocated so that you can then look at those post mortem. You can, of course, call the original malloc to do the actual memory allocation work.
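For example, on Linux with GNU ld you can route the program's allocation calls through such a wrapper at link time. The sketch below is only an illustration (the file name, the symbol names and the stats structure are my own, and the counters would need a mutex or atomics in a threaded program), but it shows the idea: keep the statistics in a plain global so that gdb can print it straight from the core file.

/* malloc_stats_wrap.c -- sketch of an instrumented malloc (illustrative names) */
#include <stddef.h>

/* Provided by the linker when the program is linked with
 * -Wl,--wrap=malloc -Wl,--wrap=free */
void *__real_malloc(size_t size);
void  __real_free(void *ptr);

/* Plain global, easy to inspect post mortem: (gdb) print g_malloc_stats */
struct malloc_stats {
    unsigned long calls;        /* number of malloc() calls            */
    unsigned long frees;        /* number of free() calls              */
    unsigned long bytes_asked;  /* total bytes requested from malloc() */
    unsigned long live_blocks;  /* mallocs minus frees                 */
} g_malloc_stats;

void *__wrap_malloc(size_t size)
{
    g_malloc_stats.calls++;
    g_malloc_stats.bytes_asked += size;
    g_malloc_stats.live_blocks++;
    return __real_malloc(size);   /* the original malloc does the real work */
}

void __wrap_free(void *ptr)
{
    if (ptr != NULL) {
        g_malloc_stats.frees++;
        g_malloc_stats.live_blocks--;
    }
    __real_free(ptr);
}

Link it into the program with something like gcc your_program.c malloc_stats_wrap.c -Wl,--wrap=malloc -Wl,--wrap=free. Note that this only intercepts calls made from the objects you link this way, not allocations made inside already-built shared libraries.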
I'm sorry that I can't give you an obvious and simple answer that will simply yield an immediate fix for you here -- without tools like valgrind this is a very hard job.
If it's Linux, you don't have to worry about collecting stats in your own malloc. Use the utility called 'memusage'.
For a sample program (sample_mem.c) like the one below:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int i = 1000;
    char *buff = NULL;

    srand(time(NULL));
    while (i--)
    {
        buff = malloc(rand() % 64);   /* allocate a random-sized block */
        free(buff);
    }
    return 0;
}
The output of memusage will be:
$ memusage sample_mem
Memory usage summary: heap total: 31434, heap peak: 63, stack peak: 80
        total calls   total memory   failed calls
 malloc|       1000          31434              0
realloc|          0              0              0  (nomove:0, dec:0, free:0)
 calloc|          0              0              0
   free|       1000          31434
Histogram for block sizes:
    0-15            253  25% ==================================================
   16-31            253  25% ==================================================
   32-47            247  24% ================================================
   48-63            247  24% ================================================
But if you are writing a malloc wrapper, you can make your program dump core after a given number of malloc calls, so that you get a clue about where the memory is going.
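A minimal sketch of that idea, again relying on GNU ld's --wrap and an arbitrary threshold of my own choosing: once the wrapper has seen the chosen number of allocations it calls abort(), which produces a core file (assuming core dumps are enabled, e.g. ulimit -c unlimited) at exactly the point you want to inspect.

/* abort_after_n_mallocs.c -- illustration only; the threshold is arbitrary */
#include <stddef.h>
#include <stdlib.h>

void *__real_malloc(size_t size);   /* resolved by -Wl,--wrap=malloc */

static unsigned long alloc_count;
static const unsigned long abort_after = 1000000;  /* pick a suspicious point */

void *__wrap_malloc(size_t size)
{
    if (++alloc_count >= abort_after)
        abort();                     /* SIGABRT -> core dump to open in gdb */
    return __real_malloc(size);
}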