I'm trying to get more useful output from pytest's --tb=no short summary. I have integration tests stored in JSON files, so the output all looks extremely similar:
tests/test_dit_cli.py .......F............................. [ 29%]
...F...F.FF........F............................F...FFFFFFF [ 75%]
FFF.F..................F.....FF [100%]
===================== short test summary info =====================
FAILED tests/test_dit_cli.py::test_dits[dit_json7] - assert "Lin...
FAILED tests/test_dit_cli.py::test_dits[dit_json40] - assert "Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json44] - assert "Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json46] - assert "Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json47] - assert "Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json56] - assert "Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json85] - assert "Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json89] - AssertionE...
FAILED tests/test_dit_cli.py::test_dits[dit_json90] - AssertionE...
FAILED tests/test_dit_cli.py::test_dits[dit_json91] - AssertionE...
FAILED tests/test_dit_cli.py::test_dits[dit_json92] - AssertionE...
FAILED tests/test_dit_cli.py::test_dits[dit_json93] - AssertionE...
FAILED tests/test_dit_cli.py::test_dits[dit_json94] - AssertionE...
FAILED tests/test_dit_cli.py::test_dits[dit_json95] - AssertionE...
FAILED tests/test_dit_cli.py::test_dits[dit_json96] - assert 'Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json97] - assert 'Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json98] - assert "Li...
FAILED tests/test_dit_cli.py::test_dits[dit_json100] - Assertion...
FAILED tests/test_dit_cli.py::test_dits[dit_json119] - assert "L...
FAILED tests/test_dit_cli.py::test_dits[dit_json125] - Assertion...
FAILED tests/test_dit_cli.py::test_dits[dit_json126] - Assertion...
================= 21 failed, 106 passed in 2.94s ==================
Seeing the same tests/test_dit_cli.py::test_dits[dit_json126] line 20 times doesn't help me gauge what's going wrong in the project, so I usually just fix errors one test at a time. Each test entry has extra information about the type of test being run and the expected outcome, but I don't know how to get that information into pytest. I would hope for something like this:
===================== short test summary info =====================
FAILED [func, vanilla Python] - assert "Li...
FAILED [Thing, value assignment] - assert "Li...
FAILED [TypeMismatch, String var assigned to List] - assert "Lin...
I actually got close to this by providing a value for ids in the parametrize call:
from _pytest.python import Metafunc  # public as pytest.Metafunc in newer pytest versions


def pytest_generate_tests(metafunc: Metafunc):
    for fixture in metafunc.fixturenames:
        if fixture == "dit_json":
            # load_from_json is my own helper that yields the test dicts from the JSON files
            test_dicts = list(load_from_json())
            titles = [test_dict["title"] for test_dict in test_dicts]
            metafunc.parametrize(argnames=fixture, argvalues=test_dicts, ids=titles)
FAILED tests/test_dit_cli.py::test_dits[TypeMismatch, List var assigned to String]
FAILED tests/test_dit_cli.py::test_dits[import, anon import referenced in list assignment]
So I'm really close; I just want to remove the filepath so that the line is shorter. Is there a way to change the filepath of where pytest thinks the tests are located? Or a hook that would let me arbitrarily modify the summary output? I tried modifying pytest_collection_modifyitems and changing item.fspath, but it didn't change anything in the output. I've seen ways to modify lots of other things about the output, but nothing regarding that filepath specifically.
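For reference, the attempt looked roughly like this (a sketch; the exact path tweak is hypothetical, and it had no visible effect on the summary lines):

def pytest_collection_modifyitems(config, items):
    for item in items:
        # hypothetical attempt: point the item at a shorter path
        item.fspath = item.fspath.basename  # the summary still printed the full path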
If you just want to shorten the node ids in the short summary info, you can overwrite the nodeid attribute of the report object. A simple example:
def pytest_runtest_logreport(report):
report.nodeid = "..." + report.nodeid[-10:]
placed in your conftest.py, will truncate each nodeid to its last ten chars:
=========================== short test summary info ===========================
FAILED ...st_spam[0] - assert False
FAILED ...st_spam[1] - assert False
FAILED ...st_spam[2] - assert False
FAILED ...st_spam[3] - assert False
FAILED ...st_spam[4] - assert False
FAILED ...:test_eggs - assert False
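Since the goal here is specifically to drop the file path, the same hook can instead strip everything up to the final :: rather than truncating to a fixed length (a sketch, not tested against the asker's suite):

def pytest_runtest_logreport(report):
    # keep only e.g. "test_dits[Thing, value assignment]" in the summary lines
    report.nodeid = report.nodeid.rsplit("::", 1)[-1]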
If you want fully customized short test summary lines, you need to implement a custom TerminalReporter and replace the vanilla one early enough in the test run. Example stub:
import pytest
from _pytest.terminal import TerminalReporter


class MyReporter(TerminalReporter):
    def short_test_summary(self):
        # your own impl goes here, for example:
        self.write_sep("=", "my own short summary info")
        failed = self.stats.get("failed", [])
        for rep in failed:
            self.write_line(f"failed test {rep.nodeid}")


@pytest.hookimpl(trylast=True)
def pytest_configure(config):
    vanilla_reporter = config.pluginmanager.getplugin("terminalreporter")
    my_reporter = MyReporter(config)
    config.pluginmanager.unregister(vanilla_reporter)
    config.pluginmanager.register(my_reporter, "terminalreporter")
This will produce a summary section like
========================== short test summary info ===========================
failed test tests/test_spam.py::test_spam[0]
failed test tests/test_spam.py::test_spam[1]
failed test tests/test_spam.py::test_spam[2]
failed test tests/test_spam.py::test_spam[3]
failed test tests/test_spam.py::test_spam[4]
failed test tests/test_spam.py::test_eggs
Note that the above impl of MyReporter.short_test_summary() is not complete and is only included for demonstration purposes! For a reference, check out the pytest implementation of TerminalReporter.short_test_summary in _pytest/terminal.py.
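For the asker's specific case, the method can be combined with the ids set in pytest_generate_tests so that each summary line carries only the parametrized id and the crash message. A minimal sketch (the reprcrash access is guarded because not every report carries one):

class MyReporter(TerminalReporter):
    def short_test_summary(self):
        self.write_sep("=", "short test summary info")
        for rep in self.stats.get("failed", []):
            # "tests/test_dit_cli.py::test_dits[Thing, value assignment]" -> "test_dits[Thing, value assignment]"
            short_id = rep.nodeid.rsplit("::", 1)[-1]
            crash = getattr(getattr(rep, "longrepr", None), "reprcrash", None)
            message = crash.message if crash is not None else ""
            self.write_line(f"FAILED {short_id} - {message[:60]}")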