
Performance testing best practices when doing TDD?

I'm working on a project which is in serious need of some performance tuning.

How do I write a test that fails if my optimizations do not improve the speed of the program?

To elaborate a bit:

The problem is not discovering which parts to optimize. I can use various profiling and benchmarking tools for that.

The problem is using automated tests to document that a specific optimization did indeed have the intended effect. It would also be highly desirable if I could use the test suite to discover possible performance regressions later on.

I suppose I could just run my profiling tools to get some values and then assert that my optimized code produces better values. The obvious problem with that, however, is that benchmarking values are not hard values. They vary with the local environment.

So, is the answer to always use the same machine to do this kind of integration testing? If so, you would still have to allow for some fuzziness in the results, since even on the same hardware benchmarking results can vary. How then to take this into account?
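
To make that concrete, here's the kind of thing I have in mind: time the operation several times, assert against the median so a single noisy run doesn't break the build, and bake a margin into the threshold. (A rough sketch in Python; `do_work` and the 0.5 s budget are made-up placeholders.)

```python
import statistics
import time

def do_work():
    # Placeholder for the real operation under test.
    sum(i * i for i in range(100_000))

def benchmark(fn, runs=10):
    """Run fn several times and return the median wall-clock time,
    which smooths out some of the run-to-run noise."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

def test_do_work_is_fast_enough():
    # 0.5 s is an arbitrary example budget, not a real criterion.
    assert benchmark(do_work) < 0.5
```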

Or maybe the answer is to keep older versions of the program and compare results before and after? This would be my preferred method, since it's mostly environment agnostic. Does anyone have experience with this approach? I imagine it would only be necessary to keep one older version, provided the tests pass whenever the performance of the latest version is at least as good as that of the former version.
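
In code, I picture something like the following: benchmark the previous version and the current one in the same session on the same machine, and assert the current one is at least as fast, give or take a noise margin. (Again just a sketch; `old_impl` and `new_impl` stand in for the same entry point in the two versions, and `benchmark` is the median-of-runs helper from the sketch above.)

```python
def old_impl():
    # Stand-in for the entry point of the previous version.
    sum(i * i for i in range(200_000))

def new_impl():
    # Stand-in for the same entry point after optimization.
    sum(i * i for i in range(100_000))

def test_new_version_is_at_least_as_fast():
    old = benchmark(old_impl)
    new = benchmark(new_impl)
    # Allow 10% fuzziness so ordinary jitter doesn't fail the suite.
    assert new <= old * 1.10
```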

asked Apr 15 '09 by KaptajnKold



2 Answers

I suspect that applying TDD to drive performance is a mistake. By all means, use it to get to good design and working code, and use the tests written in the course of TDD to ensure continued correctness - but once you have well-factored code and a solid suite of tests, you are in good shape to tune, and different (from TDD) techniques and tools apply.

TDD gives you good design, reliable code, and a test coverage safety net. That puts you in a good place for tuning, but I think that because of the problems you and others have cited, it's simply not going to take you much further down the tuning road. I say that as a great fan, proponent, and practitioner of TDD.

answered Sep 29 '22 by Carl Manaster


First you need to establish criteria for acceptable performance, then you need to devise a test that fails those criteria when run against the existing code, then you need to tweak your code for performance until it passes the test. You will probably have more than one performance criterion, and you should certainly have more than one test.
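
To sketch that recipe (Python purely for illustration; `generate_report` and the one-second budget are invented): write the assertion first, watch it fail against the current code, then tune until it passes.

```python
import time

def generate_report():
    # Placeholder for the operation being tuned.
    return [str(i) for i in range(1_000_000)]

def test_report_generation_meets_budget():
    start = time.perf_counter()
    generate_report()
    elapsed = time.perf_counter() - start
    # 1.0 s is the (invented) acceptance criterion.
    assert elapsed < 1.0, f"took {elapsed:.2f}s, budget is 1.0s"
```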

answered Sep 29 '22 by Ed Guiness