I'm using py.test 2.2.4 and my test cases are organised as follows:
import pytest

class BaseTests():
    def test_base_test(self):
        pass

@pytest.mark.linuxonly
class TestLinuxOnlyLocal(BaseTests):
    pass

@pytest.mark.windowsonly
class TestWindowsOnly(BaseTests):
    pass

class TestEverywhere(BaseTests):
    pass
The problem with this setup is that the marker applied to the first subclass leaks into the second one. To show this, I created a conftest.py as follows:
import pytest
import sys

def pytest_runtest_setup(item):
    print "\n %s keywords: %s" % (item.getmodpath(), item.keywords)
    skip_message = None
    if 'windowsonly' in item.keywords and not sys.platform.startswith('win'):
        skip_message = "Skipped: Windows only test"
    if 'linuxonly' in item.keywords and not sys.platform.startswith('linux'):
        skip_message = "Skipped: Linux only test"
    if skip_message is not None:
        print skip_message
        pytest.skip(skip_message)
When I execute this, the output shows that the markers seem to stack up:
$ py.test --capture=no
========================================== test session starts ===========================================
platform linux2 -- Python 2.7.3 -- pytest-2.2.4
collected 3 items
test_cases.py
TestLinuxOnlyLocal.test_base_test keywords: {'linuxonly': <MarkInfo 'linuxonly' args=() kwargs={}>, 'test_base_test': True}
.
TestWindowsOnly.test_base_test keywords: {'linuxonly': <MarkInfo 'linuxonly' args=() kwargs={}>, 'test_base_test': True, 'windowsonly': <MarkInfo 'windowsonly' args=() kwargs={}>}
Skipped: Windows only test
s
TestEverywhere.test_base_test keywords: {'linuxonly': <MarkInfo 'linuxonly' args=() kwargs={}>, 'test_base_test': True, 'windowsonly': <MarkInfo 'windowsonly' args=() kwargs={}>}
Skipped: Windows only test
s
================================== 1 passed, 2 skipped in 0.01 seconds ===================================
So I want to understand how it is possible that these markers leak between the sub-classes, and how this can be fixed (the test cases will live in the base class, but the sub-classes will set up the necessary platform abstraction).
pytest takes a more function-oriented approach to testing than other Python testing frameworks (e.g. unittest), so classes are viewed mainly as a way to organise tests.
In particular, markers applied to classes (or modules) are transferred to the test functions themselves; since a non-overridden derived-class method is the same object as the base-class method, the marker ends up on the base-class method and is therefore visible from every other subclass that inherits it.
(Technical detail: currently this happens in _pytest.python.transfer_markers(), but don't rely on that.)
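A quick way to convince yourself of this is to check the method identities directly; a standalone sketch using the classes from the question (Python 2 syntax, matching the setup above):

class BaseTests():
    def test_base_test(self):
        pass

class TestLinuxOnlyLocal(BaseTests):
    pass

class TestWindowsOnly(BaseTests):
    pass

# Neither subclass overrides test_base_test, so looking it up on either of
# them yields the very same underlying function object as on the base class.
# A marker that pytest transfers onto that function via one class decorator
# is therefore seen by every class that inherits it.
print TestWindowsOnly.test_base_test.__func__ is BaseTests.test_base_test.__func__          # True
print TestWindowsOnly.test_base_test.__func__ is TestLinuxOnlyLocal.test_base_test.__func__  # True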
Instead of class inheritance, consider using fixtures to encapsulate the platform-specific test setup.
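For example, a sketch of what that could look like (the helper classes and names here are invented for illustration, and the @pytest.fixture decorator requires pytest 2.3 or later):

import sys
import pytest

class LinuxHelper(object):
    def temp_dir(self):
        return "/tmp"

class WindowsHelper(object):
    def temp_dir(self):
        return "C:\\Temp"

@pytest.fixture
def platform_helper():
    # Choose the platform abstraction once, instead of encoding it in a
    # class hierarchy; the test body stays identical on every platform.
    if sys.platform.startswith('win'):
        return WindowsHelper()
    return LinuxHelper()

def test_base_test(platform_helper):
    assert platform_helper.temp_dir()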
A simpler solution might be to compare against the class name, since py.test adds the immediate containing class to item keywords:
if 'TestWindowsOnly' in item.keywords and not sys.platform.startswith('win'):
    skip_message = "Skipped: Windows only test"
if 'TestLinuxOnlyLocal' in item.keywords and not sys.platform.startswith('linux'):
    skip_message = "Skipped: Linux only test"
In addition to ecatmur's good answer: you might want to define a pytest.mark.skipif expression like this:
win32only = pytest.mark.skipif("sys.platform != 'win32'")
and then just decorate the win32-only tests with it:
@win32only
def test_something(...):
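For completeness, a small self-contained sketch of how both markers might be defined and used (test names and bodies are made up for illustration; the skip conditions are the string expressions shown above):

import sys
import pytest

# skip conditions evaluated by pytest itself -- no conftest.py hook needed
win32only = pytest.mark.skipif("sys.platform != 'win32'")
linuxonly = pytest.mark.skipif("not sys.platform.startswith('linux')")

@win32only
def test_windows_specific():
    # runs only on Windows; reported as skipped (not failed) elsewhere
    assert sys.platform == 'win32'

@linuxonly
def test_linux_specific():
    # runs only on Linux
    assert sys.platform.startswith('linux')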
Another question is if you could maybe just turn the "BaseTests" into a normal test class:
class TestCrossPlatform:
    def test_base_tests(...):
        ...
i.e. avoid any inheritance? If you then need fixtures in your tests, you can define them in your test module and accept them in the test functions (cross-platform or platform-specific ones); see the pytest fixture docs. Be sure to use pytest-2.3.5, though, because there have been a lot of improvements, especially with respect to fixtures, in the pytest-2.3 series (and more are to come with 2.4).
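Put together, such a module might look roughly like this (a sketch only; the fixture and test names are invented, and it assumes the fixture support from pytest 2.3 onwards plus the built-in tmpdir fixture):

import sys
import pytest

win32only = pytest.mark.skipif("sys.platform != 'win32'")

@pytest.fixture
def scratch_dir(tmpdir):
    # module-level fixture that both the cross-platform class and the
    # platform-specific test below can accept as an argument
    return tmpdir.mkdir("scratch")

class TestCrossPlatform:
    def test_scratch_dir_is_created(self, scratch_dir):
        assert scratch_dir.check(dir=True)

@win32only
def test_windows_path_style(scratch_dir):
    assert "\\" in str(scratch_dir)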