I want to profile the time and memory usage of a class method. I didn't find an out-of-the-box solution for this (are there such modules?), so I decided to use timeit for time profiling and memory_usage from the memory_profiler module.

I ran into a problem profiling methods with memory_profiler: I've tried several variants, and none of them worked.

When I try to use partial from functools, I get this error:
File "/usr/lib/python2.7/site-packages/memory_profiler.py", line 126, in memory_usage
aspec = inspect.getargspec(f)
File "/usr/lib64/python2.7/inspect.py", line 815, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <functools.partial object at 0x252da48> is not a Python function
By the way, exactly the same approach works fine with the timeit function.

When I try to use a lambda instead, I get this error:
File "/usr/lib/python2.7/site-packages/memory_profiler.py", line 141, in memory_usage
ret = parent_conn.recv()
IOError: [Errno 4] Interrupted system call
How can I handle class methods with memory_profiler?
PS: I have memory-profiler 0.26, installed with pip.

UPD: It's actually a bug. You can check its status here: https://github.com/pythonprofilers/memory_profiler/issues/47
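For anyone who wants to reproduce this, here is a minimal sketch of the two failing attempts, plus the (callable, args, kwargs) tuple form that the memory_usage docs mention; the Foo class and bar method are made-up names standing in for my real code:

import timeit
from functools import partial

from memory_profiler import memory_usage


class Foo(object):  # hypothetical class standing in for the real one
    def bar(self, n):
        return [0] * n


obj = Foo()

# Works: timeit accepts the partial as a callable.
timeit.timeit(partial(obj.bar, 1000000), number=1)

# Fails in 0.26: memory_usage() calls inspect.getargspec(), which
# rejects functools.partial objects (the TypeError shown above).
memory_usage(partial(obj.bar, 1000000))

# Also fails for me, with the IOError shown above.
memory_usage(lambda: obj.bar(1000000))

# The documented tuple form passes the bound method directly and may
# sidestep the wrapper problem entirely.
memory_usage((obj.bar, (1000000,), {}))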
Working with the Python Memory Profiler: memory_profiler is an open-source Python module that uses psutil internally to monitor the memory consumption of a Python process. You use it by putting the @profile decorator on any function or method and running python -m memory_profiler myscript.py; you'll see a line-by-line memory-consumption report once your script exits. The line-by-line mode works in the same way as line_profiler.
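For example, a minimal script might look like this (myscript.py and grow are illustrative names; running it under python -m memory_profiler prints the per-line report):

# myscript.py -- run with: python -m memory_profiler myscript.py
from memory_profiler import profile

@profile
def grow():
    data = [0] * (10 ** 6)  # allocate a list of a million ints
    return len(data)

if __name__ == '__main__':
    grow()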
If you want to see the change in memory allocated to the Python VM, you can use psutil. Here is a simple decorator using psutil that will print the change in memory:
import functools
import os

import psutil


def print_memory(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        process = psutil.Process(os.getpid())
        # memory_info() is the current psutil name; very old psutil
        # versions called it get_memory_info().
        start = process.memory_info()
        try:
            return fn(*args, **kwargs)
        finally:
            end = process.memory_info()
            print((end.rss - start.rss), (end.vms - start.vms))
    return wrapper


@print_memory
def f():
    s = 'a' * 100

f()
In all likelihood, the output will show no change in memory. This is because, for small allocations, the Python VM may not need to request more memory from the OS. If you allocate a large array, you will see something different:
import numpy

@print_memory
def f():
    return numpy.zeros((512, 512))

f()
Here you should see some change in memory.
If you want to see how much memory is used by each allocated object, the only tool I know of is heapy:
In [1]: from guppy import hpy; hp=hpy()
In [2]: h = hp.heap()
In [3]: h
Out[3]:
Partition of a set of 120931 objects. Total size = 17595552 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 57849 48 6355504 36 6355504 36 str
1 29117 24 2535608 14 8891112 51 tuple
2 394 0 1299952 7 10191064 58 dict of module
3 1476 1 1288416 7 11479480 65 dict (no owner)
4 7683 6 983424 6 12462904 71 types.CodeType
5 7560 6 907200 5 13370104 76 function
6 858 1 770464 4 14140568 80 type
7 858 1 756336 4 14896904 85 dict of type
8 272 0 293504 2 15190408 86 dict of class
9 304 0 215064 1 15405472 88 unicode
<501 more rows. Type e.g. '_.more' to view.>
I have not used it in a long time, so I recommend experimenting and reading the documentation. Note that for an application using a large amount of memory, it can be extremely slow to calculate this information.
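One trick that can help, assuming I remember the heapy API correctly: hp.setrelheap() sets a reference point, so a later hp.heap() only reports objects allocated after that call, keeping the report focused on your own allocations (the data list is just an illustrative allocation):

from guppy import hpy

hp = hpy()
hp.setrelheap()  # ignore everything allocated before this point

data = ['x' * 100 for _ in range(1000)]  # illustrative allocation

print hp.heap()  # should now show mostly the new str objects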