 

Proper way to automatically test performance in Python (for all developers)?

Our Python application (a cool web service) has a full suite of tests (unit tests, integration tests etc.) that all developers must run before committing code.
I want to add some performance tests to the suite to make sure no one adds code that makes us run too slow (for some rather arbitrary definition of slow).
Obviously, I can collect some functionality into a test, time it and compare to some predefined threshold.
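For concreteness, the simplest version of that idea might look like the sketch below (`process_batch` and the 0.5 s budget are hypothetical stand-ins, not from the real application):

```python
import time

def process_batch(items):
    # Stand-in for the real code path under test.
    return [item * 2 for item in items]

def test_process_batch_within_budget(threshold_seconds=0.5):
    # Time one representative call and compare against an (arbitrary) budget.
    start = time.perf_counter()
    process_batch(list(range(100_000)))
    elapsed = time.perf_counter() - start
    assert elapsed < threshold_seconds, f"too slow: {elapsed:.3f}s"
    return elapsed

test_process_batch_within_budget()
```

The weakness, as noted below, is that a fixed wall-clock threshold bakes in assumptions about the machine running it.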

The tricky requirements:

  1. I want every developer to be able to test the code on their own machine (machines vary in CPU power, OS (Linux and some Windows!), and external configuration; the Python version, libraries and modules are the same everywhere). A test server, while generally a good idea, does not solve this.
  2. I want the test to be DETERMINISTIC: regardless of what else is happening on the machine running the tests, I want multiple runs of the test to return the same results.

My preliminary thoughts:

  • Use timeit and benchmark the machine every time the tests run, then compare the performance-test results to that benchmark.
  • Use cProfile to instrument the interpreter and ignore "outside noise". I'm not sure how to read the pstats structure yet, but I'm sure it is doable.
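The first bullet could be sketched roughly like this: time a fixed CPU-bound calibration workload, then express the test's cost as a multiple of it, so the threshold is (approximately) machine-independent. The workloads and the choice of `min` here are illustrative assumptions, not a prescription:

```python
import timeit

def machine_benchmark():
    # Calibrate: time a fixed CPU-bound workload on this machine.
    # min() of several repeats is the least noise-affected figure.
    return min(timeit.repeat(
        "sum(i * i for i in range(1000))", repeat=5, number=50))

def relative_cost(stmt, setup="pass", number=50):
    # Express the statement's cost as a multiple of the calibration run,
    # making thresholds roughly comparable across machines.
    baseline = machine_benchmark()
    measured = min(timeit.repeat(stmt, setup=setup, repeat=5, number=number))
    return measured / baseline

# A performance test would then assert relative_cost(...) stays under
# some agreed ratio rather than under an absolute number of seconds.
ratio = relative_cost("sorted(range(1000), reverse=True)")
```

This only normalizes for raw CPU speed; it won't fully compensate for OS scheduling or I/O differences.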
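On the second bullet, one way cProfile can help with determinism is that call counts (unlike wall-clock time) are deterministic for deterministic code. A minimal sketch of reading the pstats structure, with `workload` as a hypothetical stand-in:

```python
import cProfile
import pstats

def workload():
    # Stand-in for the code path under test.
    total = 0
    for i in range(1000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stats = pstats.Stats(profiler)
# stats.total_calls is the total number of function calls recorded.
# stats.stats maps (filename, lineno, funcname) ->
#   (primitive_calls, total_calls, tottime, cumtime, callers).
for (filename, lineno, funcname), (cc, nc, tottime, cumtime, callers) in stats.stats.items():
    if funcname == "workload":
        calls_to_workload = nc
```

A test could assert on call counts (e.g. "this operation must not call the DB layer more than N times") and stay deterministic across machines, though it would miss regressions that slow code down without changing its call structure.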

Other thoughts?

Thanks!

Tal.

asked Apr 01 '11 by Tal Weiss


1 Answer

Check out funkload - it's a way of running your unit tests as either functional or load tests to gauge how well your site is performing.

Another interesting project that can be used in conjunction with funkload is codespeed. This is an internal dashboard that measures the "speed" of your codebase for every commit, presenting graphs with trends over time. It assumes you have a number of automatic benchmarks you can run - but it could be a useful way to keep an authoritative account of performance over time. The best use of codespeed I've seen so far is the speed.pypy.org site.

As for your requirement for determinism - perhaps the best approach is to use statistics to your advantage: automatically run the test N times, then report the min, max, average and standard deviation across runs. Check out this article on benchmarking for some pointers on this.
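The run-N-times idea can be sketched with the standard library alone (the workload string and the repeat/number values are illustrative assumptions):

```python
import statistics
import timeit

def summarize(stmt, setup="pass", repeat=10, number=100):
    # Run the benchmark `repeat` times and summarize the distribution
    # of run times instead of trusting a single measurement.
    runs = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    return {
        "min": min(runs),
        "max": max(runs),
        "mean": statistics.mean(runs),
        "stdev": statistics.stdev(runs),
    }

result = summarize("sorted(range(500), reverse=True)")
```

The minimum of the runs is often the most stable number to compare against a threshold, since background noise can only make runs slower, never faster.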

answered Sep 21 '22 by rlotun