I am soon going to be tasked with doing a proper memory profile of code written in C/C++ that uses CUDA to take advantage of GPU processing.
My initial thought is to create macros and operator overloads that would let me track calls to malloc, free, new, and delete within my source code. I would just include a different header and use the __FILE__ and __LINE__ macros to print memory calls to a log file. This type of strategy is described here: http://www.almostinfinite.com/memtrack.html
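The kind of thing I have in mind is a header along these lines - just a minimal sketch, not the code from the linked article; the header name, log format, and the choice to write to stderr are placeholders of mine:

    // memtrack_sketch.h - include this instead of calling plain new directly
    #include <cstddef>
    #include <cstdio>
    #include <new>

    // Extra-parameter overload that records where the allocation came from.
    inline void* operator new(std::size_t size, const char* file, int line)
    {
        void* p = ::operator new(size);   // forward to the normal allocator
        std::fprintf(stderr, "alloc %zu bytes at %p (%s:%d)\n", size, p, file, line);
        return p;
    }

    inline void* operator new[](std::size_t size, const char* file, int line)
    {
        void* p = ::operator new[](size);
        std::fprintf(stderr, "alloc[] %zu bytes at %p (%s:%d)\n", size, p, file, line);
        return p;
    }

    // Matching placement deletes, called only if a constructor throws mid-new.
    inline void operator delete(void* p, const char*, int) noexcept { ::operator delete(p); }
    inline void operator delete[](void* p, const char*, int) noexcept { ::operator delete[](p); }

    // Route every plain `new` in files that include this header through the overloads above.
    #define new new(__FILE__, __LINE__)

The usual caveats with this trick are that the new macro breaks placement new and explicit operator new calls in anything included after this header, and that it only sees allocations in source I can recompile - which is exactly why the 3rd-party question below comes up.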
What is the best way to track that usage in a linked-in 3rd-party library? I am assuming I would pretty much only be able to track memory usage before and after its function calls, correct? In my macro/overload scenario, I can simply track the size of each request to figure out how much memory is being asked for. How would I be able to tell how much the 3rd-party lib is using? It is also my understanding that tracking "free" doesn't really give you any sense of how much memory the code is using at any particular time, because freed memory is not necessarily returned to the OS. I appreciate any discussion of the matter.
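For the before/after idea, what I have in mind on Linux is something like this rough sketch, reading the resident set size from /proc/self/statm around the opaque call (resident_bytes and third_party_function are placeholder names of mine):

    #include <cstdio>
    #include <unistd.h>

    // Resident set size of this process in bytes; the second field of
    // /proc/self/statm is the number of resident pages. Linux-specific.
    long resident_bytes()
    {
        long pages_total = 0, pages_resident = 0;
        std::FILE* f = std::fopen("/proc/self/statm", "r");
        if (!f)
            return -1;
        int n = std::fscanf(f, "%ld %ld", &pages_total, &pages_resident);
        std::fclose(f);
        return (n == 2) ? pages_resident * sysconf(_SC_PAGESIZE) : -1;
    }

    // Bracketing an opaque library call:
    //   long before = resident_bytes();
    //   third_party_function();
    //   long after = resident_bytes();
    //   std::fprintf(stderr, "call grew RSS by %ld bytes\n", after - before);

I realize this measures pages actually touched rather than bytes requested, and that the allocator holding on to freed memory blurs the numbers, which is part of what I'm asking about. For device-side CUDA allocations I assume I'd sample cudaMemGetInfo (free/total device memory) the same way.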
I don't really want to use memory profiling tools like TotalView or Valgrind, because they typically do a lot of other things (bounds checking, etc.) that seem to make the software run very slowly. Another reason is that I want this to be somewhat thread-safe - the software uses MPI, I believe, to spawn processes. I am going to try to profile this in real time so I can dump out to log files or something that another process can read to visualize memory usage as the software runs. This will also primarily be run in a Linux environment.
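For the real-time part, my rough plan is one background sampler per process (per MPI rank), each appending to its own pid-keyed file so no locking between ranks is needed - again just a sketch, reusing the resident_bytes helper above, with arbitrary file naming:

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <unistd.h>

    long resident_bytes();   // helper from the sketch above

    static std::atomic<bool> g_sampling{true};

    // Periodically append "<elapsed ms> <resident bytes>" to this rank's log file.
    void sample_loop(int interval_ms)
    {
        char path[64];
        std::snprintf(path, sizeof(path), "memlog.%d.txt", static_cast<int>(getpid()));
        std::FILE* log = std::fopen(path, "a");
        if (!log)
            return;
        while (g_sampling.load()) {
            auto t = std::chrono::steady_clock::now().time_since_epoch();
            long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(t).count();
            std::fprintf(log, "%lld %ld\n", ms, resident_bytes());
            std::fflush(log);   // so a separate visualizer process can tail the file live
            std::this_thread::sleep_for(std::chrono::milliseconds(interval_ms));
        }
        std::fclose(log);
    }

    // Start:  std::thread sampler(sample_loop, 100);   // sample every 100 ms
    // Stop:   g_sampling = false; sampler.join();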
Thanks
Maybe the linker option --wrap=symbol can help you. A really good example can be found in man ld.
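The pattern from the man page looks roughly like this: link with -Wl,--wrap=malloc and provide your own __wrap_malloc (a sketch only; the logging format is up to you):

    // Build/link with:  g++ main.o thirdparty.a -Wl,--wrap=malloc
    #include <cstddef>
    #include <cstdio>

    extern "C" void* __real_malloc(std::size_t size);   // the untouched allocator

    // Every malloc call resolved at this link step (your objects and any
    // statically linked 3rd-party code) is redirected here.
    extern "C" void* __wrap_malloc(std::size_t size)
    {
        void* p = __real_malloc(size);
        std::fprintf(stderr, "malloc(%zu) = %p\n", size, p);
        return p;
    }

Note that --wrap only affects calls resolved in that link step, so allocations made inside already-built shared libraries are not redirected; for those, interposing malloc/free from a preloaded shared object (LD_PRELOAD) is the usual alternative.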