Can someone suggest a best practice, or a suitable library, to determine a running Python task's memory and CPU usage at the function level?
I have looked at guppy and meliae, but still can't get granularity down to the function level. Am I missing something?
UPDATE: I'm asking this to solve a specific situation. We have a set of distributed tasks running on cloud instances, and we now need to reorganize task placement onto the right instance types within the cluster; for example, high-memory-consuming tasks would be placed on larger-memory instances, and so on. By tasks I mean Celery tasks, which are nothing but plain functions whose execution resource usage we now need to profile.
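To give a concrete idea of what I mean by profiling a task's execution usage, here is a rough sketch using only the standard library (the resource module is Unix-only, and profile_task is just an illustrative name, not anything Celery provides):

import resource
import time

def profile_task(func):
    # Illustrative wrapper: logs wall time, CPU time, and peak RSS
    # for each call. In a real Celery setup this could be hooked in
    # via task decorators or worker signals instead.
    def wrapper(*args, **kwargs):
        before = resource.getrusage(resource.RUSAGE_SELF)
        start = time.time()
        result = func(*args, **kwargs)
        after = resource.getrusage(resource.RUSAGE_SELF)
        cpu = (after.ru_utime + after.ru_stime) - (before.ru_utime + before.ru_stime)
        print("%s: wall %.3fs, cpu %.3fs, peak rss %d" % (
            func.__name__, time.time() - start, cpu,
            after.ru_maxrss))  # kilobytes on Linux, bytes on macOS
        return result
    return wrapper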
Thanks.
You may want to look into a CPU profiler for Python: http://docs.python.org/library/profile.html
Example output of cProfile.run(command[, filename]):

2706 function calls (2004 primitive calls) in 4.504 CPU seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
     2    0.006    0.003    0.953    0.477  pobject.py:75(save_objects)
  43/3    0.533    0.012    0.749    0.250  pobject.py:99(evaluate)
...
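To generate a report like this yourself, here's a minimal sketch (profiling re.compile purely as a stand-in for one of your own functions):

import cProfile
import re

# Profile one statement and print the stats sorted by cumulative time;
# pass a filename as the second argument to save the stats for later
# inspection with pstats instead.
cProfile.run('re.compile("foo|bar")', sort="cumulative")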
Also, memory needs a profiler too. Open-source profilers: PySizer and Heapy.
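In case it helps, a rough sketch of what Heapy usage looks like (assuming the guppy package, or guppy3 on Python 3, is installed; the workload list is just a stand-in):

from guppy import hpy

hp = hpy()
hp.setrelheap()  # only count objects allocated after this point

data = ["x" * 100 for _ in range(10000)]  # stand-in workload

print(hp.heap())  # per-type breakdown of the live heap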