 

How to configure pytest to avoid collection failure on missing imports?

Tags:

python

pytest

I have some larger projects where running pytest --collect-only produces lots of import failures, caused by discovered test files that import packages which are not installed yet.

I want to change these test files so that they do not fail during collection.

This matters because a user may want to run a subset of tests with a pattern like pytest -k foo, but they cannot do so if collection fails on unrelated tests.

I know that I can define a pytest_configure hook that is called during collection, but if I move the imports into it, I still get failures later when the interpreter reaches code that uses the missing imports:

import pytest

def pytest_configure(config):
    try:
        import foo
    except ImportError:
        pytest.skip("skipping when foo is not installed")

def test_foo():
    assert foo.is_enabled()  # <-- foo is undefined here

My example is clearly oversimplified; I know I could repeat the import inside each test method, but I do not want to do that in tens of methods. I am looking for a cleaner solution.

asked Jul 11 '19 by sorin



1 Answer

If you don't want to abort test collection on import (or any other) errors, use the --continue-on-collection-errors flag. Example:

test_spam.py has an unresolved import:

import foo


def test_foo():
    assert foo.is_enabled()

test_eggs.py is runnable:

def test_bar():
    assert True

Running the tests yields:

$ pytest --continue-on-collection-errors -v
======================================= test session starts =======================================
...
collected 1 item / 1 errors                                                                       

test_eggs.py::test_bar PASSED

============================================= ERRORS ==============================================
__________________________________ ERROR collecting test_spam.py __________________________________
ImportError while importing test module '/home/hoefling/projects/private/stackoverflow/so-56997157/test_spam.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_spam.py:2: in <module>
    import foo
E   ModuleNotFoundError: No module named 'foo'
================================ 1 passed, 1 error in 0.06 seconds ================================

This instructs pytest to run all the tests it can collect (test_eggs.py::test_bar in this example), but the run still fails overall because one module couldn't be collected. If you don't want to fail the test run, pytest offers a handy importorskip function:

import pytest
foo = pytest.importorskip('foo')


def test_foo():
    assert foo.is_enabled()

Running the tests now yields:

$ pytest -v
======================================= test session starts =======================================
...
collected 1 item / 1 skipped                                                                      

test_eggs.py::test_bar PASSED                                                               [100%]

=============================== 1 passed, 1 skipped in 0.04 seconds ===============================

As you can see, the difference between those two is the handling of collection errors (1 errors vs 1 skipped) and the resulting exit code (1 vs 0).


Personally, I tend not to use importorskip because it can lead to tests silently not being executed in the long run. With a large test suite, you usually take only a brief look at the test result (failed or not failed) and don't explicitly check whether new tests are being skipped. This can lead to situations where tests are not executed anywhere (not even on the CI server) until someone notices (best case), or (worst case) code that is assumed to be tested reveals an error on a production system. Of course, there are other metrics that can guard against this indirectly (such as watching test coverage and prohibiting commits that lower it), but IMO an explicit failure sets a huge exclamation mark that you can't just ignore.
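If you do still prefer skipping, one way to keep the skip condition visible at a glance (rather than buried in an import) is a module-level skipif. This is a hypothetical sketch using the question's placeholder dependency foo; it probes for the module without importing it and marks every test in the file, and the reason then shows up in summaries such as pytest -rs:

```python
import importlib.util

import pytest

# Probe for the dependency without actually importing it.
FOO_MISSING = importlib.util.find_spec("foo") is None

# pytestmark applies the marker to every test in this module;
# the reason is reported in pytest's skip summary.
pytestmark = pytest.mark.skipif(FOO_MISSING, reason="foo is not installed")

if not FOO_MISSING:
    import foo


def test_foo():
    assert foo.is_enabled()
```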

Source: Skipping on a missing import dependency.

answered Oct 22 '22 by hoefling