 

How do you find the CPU consumption for a piece of Python?

Background

I have a Django application. It works and responds well under low load, but under high load, around 100 users/sec, it hits 100% CPU and then slows down for lack of CPU.

Problem:

  • Profiling the application gives me the time taken by each function.
  • This time increases under high load.
  • The time consumed may be due to complex calculation or to waiting for the CPU.

So, how do I find the CPU cycles consumed by a piece of code?

Reducing the CPU consumption should improve the response time, so the question is which case I am in:

  • I might have written extremely efficient code and need to add more CPU power

OR

  • I might have some inefficient code hogging the CPU and causing the slowdown.
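One way to tell these apart is to compare wall-clock time with process CPU time around a suspect piece of code. A minimal sketch using only the standard library (the measure helper is illustrative, and time.process_time() assumes Python 3.3+, which is newer than the question):

    import time

    def measure(func, *args):
        """Run func once and report wall time vs. CPU time.

        perf_counter() includes time spent waiting (I/O, scheduler delays);
        process_time() counts only CPU time charged to this process.
        A big gap between the two means the code is mostly waiting,
        not computing.
        """
        wall_start = time.perf_counter()
        cpu_start = time.process_time()
        result = func(*args)
        wall = time.perf_counter() - wall_start
        cpu = time.process_time() - cpu_start
        print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s  waiting: {wall - cpu:.3f}s")
        return result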

Update

  • I am using JMeter to load-test my web app; it gives me a throughput of 2 requests/sec. [100 users]
  • I get an average response time of 36 seconds with 100 concurrent requests vs. 1.25 seconds with a single request.

More Info

  • Configuration: Nginx + uWSGI with 4 workers
  • No database is used; responses come from a REST API
  • On the first hit the REST API response gets cached, so subsequent hits make no difference
  • Using ujson for JSON parsing
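On the ujson point, here is a quick sanity check of whether JSON parsing is a meaningful share of the per-request cost (the payload below is made up; ujson is assumed installed, as in the question):

    import json
    import timeit

    import ujson  # third-party, as used in the question

    # A made-up payload roughly the shape of an API response.
    payload = json.dumps({"items": [{"id": i, "name": "x" * 20} for i in range(1000)]})

    # If both numbers are tiny compared to the 1.25 s single-request time,
    # parsing is not the bottleneck.
    for name, loads in (("json", json.loads), ("ujson", ujson.loads)):
        total = timeit.timeit(lambda: loads(payload), number=1000)
        print(f"{name}: {total:.3f}s for 1000 parses")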

Curious to know:

  • Python/Django is used by so many organizations for so many big sites; there must be high-end debugging and memory/CPU analysis tools out there.
  • All I could find were casual snippets of code that perform profiling.
Yugal Jindle asked Jun 04 '12




1 Answer

You could try configuring your test to ramp up slowly, slowly enough that you can see the CPU gradually increase, and then run the profiler before you hit high CPU. There's no point trying to profile code when the CPU is already maxed out, because at that point everything will be slow. In fact, you only need a relatively light load to get useful data from a profiler.
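As a sketch of that "profile under light load" step, you can wrap a single call in cProfile from the standard library (profile_view is a hypothetical helper, not Django API; under uWSGI you would hook it in via middleware):

    import cProfile
    import io
    import pstats

    def profile_view(view, request):
        """Profile one call to a view and print the hottest functions.

        tottime is time spent inside a function itself; cumtime includes
        its callees. Sorting by tottime surfaces CPU hot spots.
        """
        profiler = cProfile.Profile()
        response = profiler.runcall(view, request)
        stream = io.StringIO()
        pstats.Stats(profiler, stream=stream).sort_stats("tottime").print_stats(20)
        print(stream.getvalue())
        return response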

Also, by gradually increasing the load you will be better able to see whether the CPU rises gradually (suggesting a CPU bottleneck) or jumps suddenly (suggesting a different kind of problem, one that would not necessarily be fixed by more CPU).
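To watch for that gradual rise versus sudden jump while JMeter ramps up, something like this polling loop works (psutil is a third-party package and an assumption; it is not mentioned in the question):

    import time

    import psutil  # third-party: pip install psutil (an assumption)

    def watch_cpu(duration_s=120, interval_s=2):
        """Print system-wide CPU utilisation while the load test ramps up.

        A smooth climb tracking the request rate suggests a CPU bottleneck;
        a sudden jump at some rate suggests a different problem (contention,
        worker exhaustion, ...).
        """
        for _ in range(int(duration_s / interval_s)):
            # cpu_percent(interval=...) blocks for the interval and returns
            # the average utilisation over that window.
            pct = psutil.cpu_percent(interval=interval_s)
            print(f"{time.strftime('%H:%M:%S')}  cpu={pct:5.1f}%")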

Try using something like a Constant Throughput Timer to pace the requests; this will prevent JMeter from getting carried away and overloading the system.

Oliver Lloyd answered Sep 29 '22