 

phpunit - testing is painfully slow

I am diving deeper and deeper into the world of unit testing.

One issue I have encountered, and this is where I would like feedback, is that when I run multiple test suites, maybe it is just me, but I need to use the --process-isolation switch for my tests to pass. I can run any of my suites individually without a problem, but running the 6-7 suites I have so far, with 180 assertions spread between them, fails without --process-isolation. The problem is that using this switch makes the test run last 35 minutes instead of the usual 2.5 minutes. That's a loooong wait.

The problem is related to using mocked DI containers for specific tests: the containers are not properly re-initialised when the test suites are run back to back. Static properties set on the DI container to test for expected failures make the tests in the following suite fail. The container has a parameter that makes it keep the contained object in a static variable, so it returns the same instance on every call. A singleton in disguise. This works fine at the application level; it's just a nuisance for testing.
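To illustrate, here is a simplified sketch of the kind of container I mean (class and method names are made up, not my actual code):

    <?php

    // Simplified illustration only - not my real container.
    class Container
    {
        /** Instances stored here survive across test suites in the same process. */
        private static $instances = array();

        public function get($id, $factory, $shared = true)
        {
            if ($shared && isset(self::$instances[$id])) {
                return self::$instances[$id]; // same instance on every call
            }

            $instance = call_user_func($factory);

            if ($shared) {
                self::$instances[$id] = $instance;
            }

            return $instance;
        }
    }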

I could avoid that container parameter and code the application to not use static properties, but avoiding a useful language construct for the sake of a methodology seems like overkill.

Maybe I am doing something wrong (I sure hope so!), but I have the impression that if one wants to run tests with the SUT in a clean state for every test, there is no getting around --process-isolation. This makes testing very time consuming and takes the joy out of it a little bit. I have worked around the issue somewhat by running suites and tests individually while I am coding, and running the whole suite in the background before major commits.

Is what I am experiencing normal, and is there a way to counter this? How do you testers out there keep testing time reasonable? How do you handle statics so they do not influence testing?

Any insight or comment appreciated.

asked May 29 '11 by stefgosselin

1 Answer

You have several problems.

The first is process isolation. Normally it should not be necessary, and you only want to use it to find out which specific test is the one that fatally breaks your tests. As you noted yourself, it is awfully slow, and that is something you cannot fix. You might, however, want to disable backing up global variables, which saves some milliseconds per test.
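For example, a minimal phpunit.xml along these lines turns the backup off (a sketch; the paths are placeholders, and newer PHPUnit versions call the second attribute backupStaticProperties):

    <!-- phpunit.xml - minimal sketch -->
    <phpunit backupGlobals="false"
             backupStaticAttributes="false">
        <testsuites>
            <testsuite name="unit">
                <directory>tests</directory>
            </testsuite>
        </testsuites>
    </phpunit>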

The second problem, which leads to your first problem, is that your code is not testable because static variables are kept between tests - my most-hated singleton problem. You can solve it by providing a "cleanup" or "reset" method in your dependency containers. Call it from the setUp() method of your base test case class so that everything is reset to a clean state before each test.
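A rough sketch of what I mean (names are made up; adapt it to your own container):

    <?php

    // Sketch only - adapt to your container implementation.
    class Container
    {
        private static $instances = array();

        public static function reset()
        {
            // drop every cached instance so the next test starts clean
            self::$instances = array();
        }

        // ... your get()/set() methods ...
    }

    abstract class BaseTestCase extends PHPUnit_Framework_TestCase
    {
        protected function setUp()
        {
            parent::setUp();
            Container::reset();
        }
    }

Your test cases then extend BaseTestCase instead of the PHPUnit test case class directly.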

Speed

Regarding the runtime of tests - I recently wrote a blog entry about finding out which tests were too slow. Generally, tests are too slow if you can't run them on your own box after saving a file or after each commit. Ten seconds is barely acceptable for me. The more tests you have, the slower running them will be.

If your full run really takes 35 minutes, then split the tests into sensible groups so that you can run the necessary ones on your own machine - only the tests that cover the code you changed. Pyrus, the next-gen PEAR installer, has the nifty feature of automatically detecting and running the tests that need to be run, depending on which files you changed. PHPUnit does not have that, but you can emulate it by hand with groups and phpunit --group .. :)
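For example (class and group names made up), tag the tests with the @group annotation:

    <?php

    class UserRepositoryTest extends PHPUnit_Framework_TestCase
    {
        /**
         * @group persistence
         */
        public function testSavesUser()
        {
            // ...
        }
    }

Running phpunit --group persistence then executes only the tests tagged that way.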

Always take care to mock web services and databases, or at least to run the database with only the data needed for each single test. Waiting 3 seconds for a web service response in a test that verifies whether you can save a user to the database is something you never want.
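A sketch of what that looks like with a PHPUnit mock (all class names are made up; getMock() is the API of the PHPUnit versions current at the time, newer versions use createMock()):

    <?php

    // All names here are invented for illustration.
    interface Geocoder
    {
        public function locate($address);
    }

    class UserService
    {
        private $geocoder;

        public function __construct(Geocoder $geocoder)
        {
            $this->geocoder = $geocoder;
        }

        public function register($name, $address)
        {
            $coords = $this->geocoder->locate($address);
            // ... persist $name and $coords ...
            return $coords !== null;
        }
    }

    class UserServiceTest extends PHPUnit_Framework_TestCase
    {
        public function testRegisterDoesNotHitTheRealWebService()
        {
            // the mock answers instantly instead of waiting for HTTP
            $geocoder = $this->getMock('Geocoder');
            $geocoder->expects($this->once())
                     ->method('locate')
                     ->will($this->returnValue(array(48.85, 2.35)));

            $service = new UserService($geocoder);
            $this->assertTrue($service->register('alice', 'Paris'));
        }
    }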

answered by cweiske