I'm trying to test a TensorFlow program. I'm setting up the TensorFlow session using a parametrized py.test fixture:
import pytest
import tensorflow as tf
from tensorflow.contrib.eager.python import tfe

@pytest.fixture(scope="session", params=configuration)
def session(request):
    if request.param == 'tensorflow':
        return tf.Session()
    elif request.param == 'tensorflow-eager':
        tfe.enable_eager_execution()
        return tf.Session()
    elif ...
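As an aside: since tf.Session holds native resources, a yield-style variant of the same fixture would also close the session on teardown. A minimal sketch, reusing the configuration list from above:
@pytest.fixture(scope="session", params=configuration)
def session(request):
    # same dispatch as above, shown here with explicit teardown
    if request.param == 'tensorflow-eager':
        tfe.enable_eager_execution()
    sess = tf.Session()
    yield sess    # the tests run here
    sess.close()  # teardown: release the session's resources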
TensorFlow has global state, so separate test launches can pollute it; for example, there is no way to disable eager execution once it has been enabled. Is there a way to instruct py.test to create a new process for each test? Or is there another way to configure the environment for a test besides a parametrized fixture? Example usage:
@pytest.mark.parametrize("bias_type", ['variable', 'ndarray', 'list', 'tuple'])
@pytest.mark.parametrize("kernel_type", ['variable', 'ndarray', 'list', 'tuple'])
@pytest.mark.parametrize("input_type", ['variable', 'ndarray', 'list', 'tuple'])
def test_convolution(session, input_type, kernel_type, bias_type):
...
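For illustration only, the string parameters might be mapped to concrete input objects inside the test body with a small helper (make_input below is hypothetical, not part of the question):
import numpy as np

def make_input(kind, data):
    # hypothetical helper: build a test input of the requested kind
    if kind == 'variable':
        return tf.Variable(data)
    elif kind == 'ndarray':
        return np.asarray(data)
    elif kind == 'list':
        return list(data)
    return tuple(data)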
As suggested in the comments, pytest-xdist is the solution. The plugin is designed for parallel or distributed execution of tests (even multi-platform execution is possible), but it is also well suited to running each test in a separate process, which you can achieve with the --forked argument. Note that --forked will not work on Windows, because Windows doesn't support the fork-exec model and doesn't ship any replacement for fork().
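If you always want forked runs on POSIX machines, one option is to bake the flag into the project's pytest configuration. A minimal sketch, assuming a pytest.ini at the project root:
[pytest]
# run every test in its own forked subprocess (POSIX only)
addopts = --forked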
Let's define a fixture that will attempt to turn eager execution on before running each test:
from tensorflow.contrib.eager.python import tfe
import pytest

@pytest.fixture(scope='function', autouse=True)
def eager(request):
    tfe.enable_eager_execution()
This fixture will obviously fail all tests but the first one, since eager execution can be turned on only once per process. With some dummy tests:
def test_spam():
    assert True

def test_eggs():
    assert True

def test_bacon():
    assert True
Running plain pytest fails as expected:
$ pytest -v
============================== test session starts ================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0 -- /Users/hoefling/.virtualenvs/stackoverflow/bin/python3.6
cachedir: .cache
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48234032, inifile:
plugins: forked-0.2, mock-1.6.3, hypothesis-3.44.4
collected 3 items
test_spam.py::test_spam PASSED [ 33%]
test_spam.py::test_eggs ERROR [ 66%]
test_spam.py::test_bacon ERROR [100%]
...
E ValueError: Do not call tfe.enable_eager_execution more than once in the
same process. Note eager-mode methods such as tfe.run() also call
tfe.enable_eager_execution.
...
Now install pytest-xdist:
$ pip install pytest-xdist
and rerun the tests:
$ pytest -v --forked
============================== test session starts ================================
platform darwin -- Python 3.6.3, pytest-3.3.1, py-1.5.2, pluggy-0.6.0 -- /Users/hoefling/.virtualenvs/stackoverflow/bin/python3.6
cachedir: .cache
rootdir: /Users/hoefling/projects/private/stackoverflow/so-48234032, inifile:
plugins: forked-0.2, xdist-1.22.0, mock-1.6.3, hypothesis-3.44.4
collected 3 items
test_spam.py::test_spam PASSED [ 33%]
test_spam.py::test_eggs PASSED [ 66%]
test_spam.py::test_bacon PASSED [100%]
============================= 3 passed in 6.09 seconds ============================
The tests still run sequentially, but each one runs in its own subprocess, so none of them fails.
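To convince yourself that each test really gets its own process, a quick check (hypothetical tests, not part of the answer) is to print the PID:
import os

def test_pid_spam():
    # under --forked each test should report a different PID
    print('PID:', os.getpid())

def test_pid_eggs():
    print('PID:', os.getpid())
Running pytest -sv --forked should print a different PID for each test, whereas plain pytest -sv prints the same one.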
Now you can start experimenting with parallel execution, e.g.
$ pytest -v --forked --numprocesses=auto
Refer to the plugin docs for more info and more usage examples.
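As one concrete example, pytest-xdist's -n is the documented shorthand for --numprocesses, so forking each test while spreading the work over four workers would be:
$ pytest -v --forked -n 4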