pytest parameterized session fixtures execute too many times

Tags: python, pytest

Consider the following test code, which compares a mock run result with an expected result. The value of the run result depends on the value of the parameterized fixture paramfixture, which provides two values, so there are two possible variants of the run result. Since these are all session-scoped fixtures, we should expect the run_result fixture to execute only twice.

Now take a look at the test case test_run_result, which receives the run_result and expected_result fixtures to compare, and also the tolerance fixture, which is parameterized with two values. The test case checks whether the difference between the expected and the actual result falls within the tolerance. Note that the run result does not depend on the tolerance.

For some reason that I don't understand, pytest executes the run_result fixture three times. Can you explain why?

This was tested with pytest version 2.9.1.

By the way, the run_result fixture would execute only twice if the test case weren't parameterized, or if it were parameterized using a decorator instead of a fixture, i.e. @pytest.mark.parametrize('tolerance', [1e-8, 1e-11]).
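For reference, a minimal sketch of that decorator-based variant (it reuses the run_result and expected_result fixtures from the full code below, and replaces the tolerance fixture):

import pytest

# Sketch only: parameterizing the test itself rather than going through a
# session-scoped tolerance fixture; run_result then executes just twice.
@pytest.mark.parametrize('tolerance', [1e-8, 1e-11])
def test_run_result(run_result, expected_result, tolerance):
    assert abs(run_result - expected_result) < tolerance

Here is the full code: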

import pytest

runcounter = 0

@pytest.fixture(scope="session", params=[1e-8, 1e-11])
def tolerance(request):
    """Precision in floating point compare."""
    return request.param

@pytest.fixture(scope='session', params=[1, 2])
def paramfixture(request):
    """The parameterized input that run_result and expected_result share."""
    return request.param

@pytest.fixture(scope="session")
def expected_result(paramfixture):
    return 1 + paramfixture

@pytest.fixture(scope='session')
def run_result(paramfixture):
    """The mock run result; counts how many times pytest sets it up."""
    global runcounter
    runcounter = runcounter + 1
    print "Run #", runcounter, 'param:', paramfixture
    return 1 + paramfixture

def test_run_result(run_result, expected_result, tolerance):
    print "run_result: %d, expected_result: %d" % (run_result, expected_result)
    assert abs(run_result - expected_result) < tolerance

Pytest output:

$ py.test -vs test/end2end/test_temp.py
===================================================== test session starts ======================================================
platform linux2 -- Python 2.7.11, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- /home/f557010/.conda/envs/sfpdev/bin/python
cachedir: .cache
rootdir: /home/f557010/svndev/SFP, inifile: pytest.ini
collected 4 items

test/end2end/test_temp.py::test_run_result[1e-08-1] Run # 1 param: 1
run_result: 2, expected_result: 2
PASSED
test/end2end/test_temp.py::test_run_result[1e-08-2] Run # 2 param: 2
run_result: 3, expected_result: 3
PASSED
test/end2end/test_temp.py::test_run_result[1e-11-2] 
run_result: 3, expected_result: 3
PASSED
test/end2end/test_temp.py::test_run_result[1e-11-1] Run # 3 param: 1
run_result: 2, expected_result: 2
PASSED

=================================================== 4 passed in 0.01 seconds ===================================================
asked Nov 09 '22 by Victor Olex

1 Answer

pytest's parameterization is all about getting a fixture and holding on to it for a reasonable lifecycle. It does not cache all of the input->output mappings. That is not what you wanted here, but it makes sense if you consider that fixtures are typically things like database connections or TCP connections (like the smtp fixture in the pytest documentation examples).
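To make that concrete, here is a hedged sketch of the kind of fixture this design serves (the host name is a placeholder, and request.addfinalizer is the pre-pytest-3.0 way to register teardown):

import smtplib

import pytest

@pytest.fixture(scope="session", params=["mail.example.com"])  # placeholder host
def smtp(request):
    # An expensive resource worth holding on to for the whole session...
    conn = smtplib.SMTP(request.param)
    # ...and torn down once pytest is finished with this param.
    request.addfinalizer(conn.quit)
    return conn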

You could still make a decent argument that a bit more introspection and optimization on pytest's part would have benefited you (presuming here that run_result is very expensive and you wish to minimize runs).
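If run_result really is that expensive, one workaround (a sketch of mine, not something pytest does for you) is to memoize per param value yourself, so the execution order cannot trigger recomputation:

import pytest

_results = {}  # param value -> computed result

@pytest.fixture(scope='session')
def run_result(paramfixture):
    # Compute once per param value; repeat requests with the same param
    # hit the cache no matter how pytest orders the tests.
    if paramfixture not in _results:
        _results[paramfixture] = 1 + paramfixture
    return _results[paramfixture]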

Why does it do "the wrong thing" here? If you look carefully at the fixtures, tolerance is the "first order" or closest parameterized fixture.

An ugly, inscrutable change that "works":

@pytest.fixture(scope="session", params=[0.01, 0.0002])
def tol(request):
    """Precision in floating point compare."""
    return request.param

@pytest.fixture(scope="session")
def tolerance(tol):
    """Precision in floating point compare."""
    return tol

Why in the world would that work? It moves the tolerance param down to the same "level" of nesting as the param on the other fixtures. With this, pytest does in fact run run_result only twice.

============================================ test session starts ============================================
<snip>
collected 4 items

test_tolerance.py::test_run_result[1-0.01] Run # 1 param: 1
run_result: 2, expected_result: 2 tolerance: 0.010000
PASSED
test_tolerance.py::test_run_result[1-0.0002]
run_result: 2, expected_result: 2 tolerance: 0.000200
PASSED
test_tolerance.py::test_run_result[2-0.0002] Run # 2 param: 2
run_result: 3, expected_result: 3 tolerance: 0.000200
PASSED
test_tolerance.py::test_run_result[2-0.01]
run_result: 3, expected_result: 3 tolerance: 0.010000
PASSED

========================================= 4 passed in 0.01 seconds ==========================================

Should you use that code? Please try not to, as it is too hard to grok; if you do use a hack like that, comment heavily to 'splain yourself.

You asked "why", and the key here is that the params for tolerance and paramfixture are at different levels of nesting, with the "closest" one being the one that iterates slowest. Fixtures aren't cached here; they are just used in a logical order, innermost iterating fastest.
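A minimal illustration of that ordering rule (a sketch with made-up fixture names; run it with -vs to watch which param cycles fastest, at least on the pytest version discussed here):

import pytest

@pytest.fixture(scope='session', params=['a', 'b'])
def inner(request):
    return request.param

@pytest.fixture(scope='session')
def middle(inner):
    # One extra level of nesting between the param and the test.
    return inner

@pytest.fixture(scope='session', params=[1, 2])
def closest(request):
    return request.param

def test_order(middle, closest):
    # The directly requested 'closest' param is held steady in blocks,
    # while the more deeply nested 'inner' param cycles fastest.
    print("inner=%s closest=%s" % (middle, closest))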

answered Nov 14 '22 by Phil Cooper