I've already found the following question, but I was wondering if there was a quicker and dirtier way of grabbing an estimate of how much memory the python interpreter is currently using for my script that doesn't rely on external libraries.
I'm coming from PHP and used to use memory_get_usage() and memory_get_peak_usage() a lot for this purpose and I was hoping to find an equivalent.
You can use the memory_profiler package by putting the @profile decorator on any function or method and running python -m memory_profiler myscript. You'll see line-by-line memory usage once your script exits.
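A minimal sketch of that workflow, assuming memory_profiler has been installed via pip (the script and function names here are just illustrations):

# example.py -- hypothetical script name
from memory_profiler import profile

@profile
def allocate():
    # Build and then drop a large list so the line-by-line report
    # shows memory climbing and then being released.
    data = [0] * (10 ** 6)
    del data

if __name__ == '__main__':
    allocate()

Running python -m memory_profiler example.py prints the per-line report when the script finishes.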
Python has a pymalloc allocator optimized for small objects (512 bytes or smaller) with a short lifetime. It uses memory mappings called “arenas” with a fixed size of 256 KiB.
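If you want to peek at those arenas yourself, CPython ships a CPython-specific introspection hook for exactly this; a sketch (not something the answers below rely on):

import sys

# Dump low-level pymalloc statistics (arenas, pools, and the size
# classes up to 512 bytes) to stderr. CPython-only, since Python 3.3.
sys._debugmallocstats()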
A simple solution for Linux and other systems with /proc/self/status is the following code, which I use in a project of mine:
def memory_usage():
    """Memory usage of the current process in kilobytes."""
    status = None
    result = {'peak': 0, 'rss': 0}
    try:
        # This will only work on systems with a /proc file system
        # (like Linux).
        status = open('/proc/self/status')
        for line in status:
            parts = line.split()
            key = parts[0][2:-1].lower()
            if key in result:
                result[key] = int(parts[1])
    finally:
        if status is not None:
            status.close()
    return result
It returns the current and peak resident memory size (which is probably what people mean when they talk about how much RAM an application is using). It is easy to extend it to grab other pieces of information from the /proc/self/status file.
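For instance, a sketch of such an extension that also grabs the virtual size and data segment, using the same parsing as memory_usage() above (the extra keys follow the Vm* fields shown in the dump below):

def memory_usage_full():
    """Current, peak, virtual, and data-segment sizes in kilobytes."""
    # 'VmPeak:' -> 'peak', 'VmRSS:' -> 'rss',
    # 'VmSize:' -> 'size', 'VmData:' -> 'data'
    result = {'peak': 0, 'rss': 0, 'size': 0, 'data': 0}
    with open('/proc/self/status') as status:
        for line in status:
            parts = line.split()
            key = parts[0][2:-1].lower()
            if key in result:
                result[key] = int(parts[1])
    return result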
For the curious: the full output of cat /proc/self/status looks like this:
% cat /proc/self/status
Name:   cat
State:  R (running)
Tgid:   4145
Pid:    4145
PPid:   4103
TracerPid:      0
Uid:    1000    1000    1000    1000
Gid:    1000    1000    1000    1000
FDSize: 32
Groups: 20 24 25 29 40 44 46 100 1000
VmPeak:     3580 kB
VmSize:     3580 kB
VmLck:         0 kB
VmHWM:       472 kB
VmRSS:       472 kB
VmData:      160 kB
VmStk:        84 kB
VmExe:        44 kB
VmLib:      1496 kB
VmPTE:        16 kB
Threads:        1
SigQ:   0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000000000
SigCgt: 0000000000000000
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: ffffffffffffffff
Cpus_allowed:   03
Cpus_allowed_list:      0-1
Mems_allowed:   1
Mems_allowed_list:      0
voluntary_ctxt_switches:        0
nonvoluntary_ctxt_switches:     0
You could also use the getrusage() function from the standard library module resource. The resulting object has the attribute ru_maxrss, which gives the total peak memory usage for the calling process:
>>> import resource
>>> resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
2656
The Python docs aren't clear on what the units are exactly, and they turn out to be platform-dependent: the Linux man page for getrusage(2) gives ru_maxrss in kilobytes, while the Mac OS X man page gives it in bytes. On Linux it is therefore equivalent to the /proc/self/status information (i.e. kilobytes) described in the accepted answer. For the same process as above, running on Linux, the function listed in the accepted answer gives:
>>> memory_usage()
{'peak': 6392, 'rss': 2656}
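If you want a single helper that papers over that platform difference, a sketch along these lines works (the darwin check for macOS is my assumption, based on the man pages above):

import resource
import sys

def peak_memory_kb():
    """Peak resident set size of the current process, in kilobytes."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == 'darwin':
        # macOS reports ru_maxrss in bytes; Linux reports kilobytes.
        peak //= 1024
    return peak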
This may not be quite as easy to use as the /proc/self/status solution, but it is in the standard library, so (units aside) it should work on Unix systems which lack /proc/, such as Mac OS X and other Unixes. Note that the resource module is Unix-only, so it won't help on Windows.
Also, getrusage() can be given resource.RUSAGE_CHILDREN to get the usage for (terminated, waited-for) child processes, and, on some systems, resource.RUSAGE_BOTH for total (self and child) process usage.
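A short sketch of those variants; RUSAGE_BOTH is probed with hasattr because it is not defined on every platform:

import resource

# Peak RSS of the current process.
self_peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# Peak RSS of terminated, waited-for child processes.
children_peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

# RUSAGE_BOTH (self plus children) only exists on some systems.
if hasattr(resource, 'RUSAGE_BOTH'):
    both_peak = resource.getrusage(resource.RUSAGE_BOTH).ru_maxrss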
This covers the memory_get_peak_usage() case, but not memory_get_usage() (current usage); I'm unsure whether any other functions from the resource module can give current usage.