I am updating an inherited repository which has poor test coverage. The repo itself is a pytest plugin. I've changed the repo to use tox along with pytest-cov, and converted the "raw" tests to use pytester, as the pytest documentation recommends for testing plugins.
The testing and the tox build work great. However, coverage reports false misses on things like class definitions, imports, etc. This is because the plugin code is imported as part of pytest start-up, so it isn't getting "covered" until the testing actually starts.
I've read the pytest, pytest-cov, coverage, and tox docs, tried several configurations, but to no avail. I've exhausted my pool of Google keyword combinations that might lead me to a good solution.
pkg_root/
    .tox/
        py3/
            lib/
                python3.7/
                    site-packages/
                        plugin_module/
                            supporting_module.py
                            plugin.py
                            some_data.dat
    plugin_module/
        supporting_module.py
        plugin.py
        some_data.dat
    tests/
        conftest.py
        test_my_plugin.py
    tox.ini
    setup.py
Some relevant snippets with commentary:
[pytest]
addopts = --cov={envsitepackagesdir}/plugin_module --cov-report=html
testpaths = tests
This configuration gives me an error that no data was collected; no htmlcov is created in this case.
If I just use --cov, I get (as expected) very noisy coverage, which shows the functional hits and misses, but with the false misses reported above for imports, class definitions, etc.
pytest_plugins = ['pytester'] # Entire contents of file!
def test_a_thing(testdir):
    testdir.makepyfile(
        """
        def test_that_fixture(my_fixture):
            assert my_fixture.foo == 'bar'
        """
    )
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
How can I get an accurate report? Is there a way to defer the plugin loading until it's demanded by the pytester tests?
Instead of using the pytest-cov plugin, use coverage to run pytest:
coverage run -m pytest ....
That way, coverage will be started before pytest.
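If you're driving the tests through tox, the same idea can be wired into the test environment. A minimal sketch, assuming the package is named plugin_module and that pytest and coverage are the only test dependencies (adjust deps, env list, and paths to your setup):

# tox.ini (sketch; names are assumptions)
[tox]
envlist = py3

[testenv]
deps =
    pytest
    coverage
commands =
    coverage run --source=plugin_module -m pytest tests
    coverage report -m

Because coverage starts before pytest imports the plugin, the import-time lines (module-level code, class definitions) get recorded instead of showing up as false misses.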
You can achieve what you want without pytest-cov.
❯ coverage run --source=<package> --module pytest --verbose <test-files-dirs> && coverage report --show-missing
❯ coverage run --source=<package> -m pytest -v <test-files-dirs> && coverage report -m
❯ coverage run --source=plugin_module -m pytest -v tests && coverage report -m
======================= test session starts ========================
platform darwin -- Python 3.9.4, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /Users/johndoe/.local/share/virtualenvs/plugin_module--WYTJL20/bin/python
cachedir: .pytest_cache
rootdir: /Users/johndoe/projects/plugin_module, configfile: pytest.ini
collected 1 items
tests/test_my_plugin.py::test_my_plugin PASSED [100%]
======================== 1 passed in 0.04s =========================
Name Stmts Miss Cover Missing
-------------------------------------------------------------
plugin_module/supporting_module.py 4 0 100%
plugin_module/plugin.py 6 0 100%
-------------------------------------------------------------
TOTAL 21 0 100%
For an even nicer output, you can use:
❯ coverage html && open htmlcov/index.html
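If you'd rather not repeat --source and -m on every invocation, the same settings can live in a coverage config file. A minimal sketch, again assuming the package name plugin_module:

# .coveragerc (sketch)
[run]
source = plugin_module

[report]
show_missing = True

With this in place, a plain "coverage run -m pytest tests" followed by "coverage report" (or "coverage html") gives the same output as the explicit flags above.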
For reference, the relevant options from the help output:

❯ coverage -h
❯ pytest -h

coverage
    run -- Run a Python program and measure code execution.
        -m, --module -- Treat the target as an importable Python module (run it as "python -m" would) rather than a script path.
        --source=SRC1,SRC2,... -- A list of packages or directories of code to be measured.
    report -- Report coverage stats on modules.
        -m, --show-missing -- Show line numbers of statements in each module that weren't executed.
    html -- Create an HTML report.
pytest
    -v, --verbose -- Increase verbosity.