nose2 vs py.test with isolated processes

We have been using nosetests to run and collect our unit tests (which are all written with the standard python unittest module, which we like). Things we like about nose:

  • uses standard python unit tests (we like the structure this imposes).
  • supports reporting coverage and test output in xml (for jenkins).

What we are missing is a good way to run tests in isolated processes while maintaining good error reporting (we are testing C++ libraries through python, so segfaults should not be catastrophic). nosepipe seems to no longer be maintained, and we have had some problems with it.

We are trying to figure out whether we should:

  • fix/use nosepipe,
  • switch to nose2 and write nosepipe2, or
  • use pytest or some other testing framework.

We would prefer an approach with a good community behind it. Our problem (C++ plugins requiring good isolation) seems like it should be common, but googling has not turned up any maintained solutions. Advice from more experienced heads is appreciated.
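
To make the failure mode concrete, here is a minimal, hypothetical example of the kind of test we need to isolate (ctypes stands in for our real C++ bindings; this is not one of our actual tests):

# content of test_crash_example.py -- hypothetical illustration
import ctypes
import unittest

class CrashingExtensionTest(unittest.TestCase):
    def test_null_pointer_read(self):
        # the crash happens in C code, so no python-level exception
        # handling can catch it: the whole test runner process dies
        ctypes.string_at(0)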

asked Aug 03 '12 by jjh

1 Answer

pytest has the xdist plugin, which provides the --boxed option to run each test in a controlled subprocess. Here is a basic example:

# content of test_module.py

import pytest
import os
import time

# run the test function 50 times with different arguments
@pytest.mark.parametrize("arg", range(50))
def test_func(arg):
    time.sleep(0.05)  # each test takes a while
    if arg % 19 == 0: 
        os.kill(os.getpid(), 15)

If you run this with:

$ py.test --boxed
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev8
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov
collecting ... collected 50 items

test_module.py f..................f..................f...........

================================= FAILURES =================================
_______________________________ test_func[0] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[19] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[38] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
=================== 3 failed, 47 passed in 3.41 seconds ====================

You'll see that a couple of tests are reported as crashing, indicated by the lower-case f and the respective failure summaries. You can also use xdist's parallelization feature to speed up your testing:

$ py.test --boxed -n3
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev8
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov
gw0 I / gw1 I / gw2 I
gw0 [50] / gw1 [50] / gw2 [50]

scheduling tests via LoadScheduling
..f...............f..................f............
================================= FAILURES =================================
_______________________________ test_func[0] _______________________________
[gw0] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[19] _______________________________
[gw2] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[38] _______________________________
[gw2] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
=================== 3 failed, 47 passed in 2.03 seconds ====================

In principle, just distributing tests to parallel subprocesses may often suffice and avoids the overhead of starting a boxed process for each test. However, this currently only works if you have fewer crashing tests than the number of processes given with -n, because a dying test process is not restarted. This limitation could probably be removed without too much effort; for now you will have to use the safe --boxed option.
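For example, plain distribution without boxing (assuming three workers, as above) would simply drop the --boxed flag:

$ py.test -n3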

answered by hpk42