I am new to this, so please bear with me if the question is not specific enough.
I want to know how to combine unit tests into a single integration test in pytest. Furthermore, I would like to repeat that integration test several times within a single test session. Please let me know if there is a way to do this in pytest.
Scenario: I have two unit tests named test_start_call and test_end_call that are invoked by pytest in that order.
Now I wanted to repeat the process a couple of times, so I did this:

    for i in range(c): pytest.main(some command)

which works fine: it starts and tears down a test session as many times as I want, with one call being made in each session.
But I want to make several calls in a single test session, and in the last two days I have not found a way to do this. I looked into xdist, but I don't want to start new processes in parallel; the integration test should serially execute the unit tests (start call and end call) as many times as I want within a single test session.
I am stuck, so any help would be great. Thank you!
Test categorization in particular is a subtly powerful tool. pytest enables you to create marks, or custom labels, for any test you like. A test may have multiple labels, and you can use them for granular control over which tests to run.
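As a minimal sketch of how marks work (the integration label here is just an illustrative name, not anything pytest predefines), you apply a mark with a decorator and then select marked tests on the command line with pytest -m integration:

```python
import pytest

# Label this test so it can be selected with: pytest -m integration
@pytest.mark.integration
def test_start_call():
    assert True
```

Custom marks should also be registered in pytest.ini or pyproject.toml (under markers) to avoid unknown-mark warnings.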
unittest provides a solid base on which to build your test suite, but it has a few shortcomings. A number of third-party testing frameworks attempt to address some of the issues with unittest, and pytest has proven to be one of the most popular. pytest is a feature-rich, plugin-based ecosystem for testing your Python code.
Consult the docs for each library, as the command-line invocation will vary. pytest also has automatic test discovery. When your suite is separated like this, you can run pytest tests/unit tests/integration to execute the integration tests after the unit tests.
If you can write an expression that you expect to evaluate to True, then pytest will test it for you with a plain assert statement. You can run it using the pytest command; pytest presents the test results differently than unittest does.
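For example, a pytest test is just a function whose name starts with test_ containing a plain assert (add and test_add here are made-up names for illustration):

```python
# test_example.py -- pytest auto-discovers functions named test_*
def add(a, b):
    return a + b

def test_add():
    # pytest rewrites this assert so a failure shows both compared values
    assert add(2, 3) == 5
```

Running pytest in the containing directory collects and runs test_add with no boilerplate class or runner.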
Review https://docs.pytest.org/en/latest/parametrize.html
Then add a mult marker to each test and consume it in the pytest_generate_tests hook to generate multiple tests; the parametrized fixture values will be visible with --collect-only --mult 3. Using a marker this way constrains the repetition mechanism to only the marked tests.
# conftest.py
def pytest_addoption(parser):
    parser.addoption('--mult', default=0, help="run each marked test this many times")

def pytest_generate_tests(metafunc):
    count = int(metafunc.config.getoption('--mult'))
    if count and metafunc.definition.get_closest_marker('mult'):
        if 'mult' not in metafunc.fixturenames:
            metafunc.fixturenames.append('mult')
        metafunc.parametrize("mult", range(count))

# test_foo.py
import pytest

@pytest.mark.mult
def test_start_call():
    ...
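If the repeat count can live in the code rather than on the command line, a simpler sketch is to club both steps into one integration test and parametrize it directly. Here start_call, end_call, and test_integration are hypothetical stand-ins for your real call logic:

```python
import pytest

def start_call():
    # hypothetical stand-in for whatever test_start_call exercises
    return {"active": True}

def end_call(call):
    # hypothetical stand-in for whatever test_end_call exercises
    call["active"] = False
    return call

# Repeat the start/end sequence 3 times, serially, in one test session
@pytest.mark.parametrize("run", range(3))
def test_integration(run):
    call = start_call()
    assert call["active"]
    call = end_call(call)
    assert not call["active"]
```

Parametrization keeps each repetition a separate collected test item, so a failure in run 2 is reported independently of runs 0 and 1.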
From what you're saying, I'm not quite sure you are using the right toolset. It sounds like you are either trying to load-test something (run it multiple times and see if it falls over), or trying to do something more data-driven, i.e. given input values x through y, see how it behaves.
If you are trying to do something like load testing, I'd suggest looking into something like locust.
Here is a reasonable blog with different examples on driving unit tests via different data.
Again, I'm not sure if either of these is actually what you're looking for.