In an attempt to do BDD-style testing of some code, I have a set of tests that I want to run against multiple scenarios. I have done this many times in C# with NUnit & NSubstitute, but I am struggling to achieve the desired result for C++ code with GoogleTest.
The concept of what I want to do (which does not even compile, because the test class that TEST_F(BaseTest, ...) generates inherits the pure virtual method from BaseTest and so cannot be instantiated) is:
class BaseTest : public ::testing::Test {
protected:
    int expected = 0;
    int actual = 0;

    virtual void SetUp() { printf("BaseTest SetUp()\r\n"); }
    virtual void TearDown() { printf("BaseTest TearDown()\r\n"); }
    virtual void PureVirtual() = 0;
};

TEST_F(BaseTest, BaseTest1)
{
    printf("BaseTest BaseTest1\r\n");
    ASSERT_EQ(expected, actual);
}

class ScenarioOne : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioOne SetUp()\r\n");
        actual = 20;
        expected = 20;
    }

    virtual void PureVirtual() {}
};

class ScenarioTwo : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioTwo SetUp()\r\n");
        actual = 98;
        expected = 98;
    }

    virtual void PureVirtual() {}
};
The above code is greatly simplified. The BaseTest class would have 30+ tests defined, and the scenario classes would have extensive and complicated input data to exercise the code being tested; the expected results would be sizeable and non-trivial. Hence the idea of defining the input data and expected results in a derived class's SetUp() method and stimulating the code under test with that input data. The tests in the base class would then compare the various actual results against the expected results and pass/fail as appropriate.
I have considered trying to use parameterized tests, but due to the complex nature of the input data and expected results this looks difficult; moreover, I believe each new test scenario would mean modifying each of the tests to take the input data and expected results as an additional parameter.
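For context, this is roughly what a value-parameterized version might look like. Everything here (ScenarioData, ScenarioTest, the sample values) is illustrative rather than taken from the real code; it shows how every test would have to unpack its inputs and expectations from the parameter:

#include <gtest/gtest.h>

// Purely illustrative parameter type; the real input data and expected
// results would be far larger and more complex.
struct ScenarioData {
    int input;
    int expected;
};

class ScenarioTest : public ::testing::TestWithParam<ScenarioData> {};

TEST_P(ScenarioTest, Test1)
{
    const ScenarioData& data = GetParam();
    int actual = data.input;  // stand-in for stimulating the code under test
    ASSERT_EQ(data.expected, actual);
}

// Newer GoogleTest spells this INSTANTIATE_TEST_SUITE_P; older releases
// use INSTANTIATE_TEST_CASE_P.
INSTANTIATE_TEST_SUITE_P(AllScenarios, ScenarioTest,
                         ::testing::Values(ScenarioData{20, 20},
                                           ScenarioData{98, 98}));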
As I said earlier, I can do this sort of thing easily in C# but sadly I am working on a C++ project at this time. Is what I'm trying to do possible with GoogleTest?
OK - I've just thought of a potential solution.
Put all the tests in a header file like this:
// Tests.h - Tests to be performed for all test scenarios.
// Deliberately no include guard: this header is included once per scenario,
// with SCENARIO_NAME #defined to the scenario fixture class beforehand.
TEST_F(SCENARIO_NAME, test1)
{
    ASSERT_EQ(expected, actual);
}
The BaseTest class would just have basic SetUp()/TearDown() methods, member variables to hold the expected and actual results, plus any helper functions for the derived scenario classes - but no tests, so it could be abstract if wanted.
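As a rough sketch, the shared fixture might look something like this (shown here without the pure virtual method, so the fixture is concrete; it could equally stay abstract as noted above):

#include <cstdio>
#include <gtest/gtest.h>

class BaseTest : public ::testing::Test {
protected:
    virtual void SetUp() { printf("BaseTest SetUp()\r\n"); }
    virtual void TearDown() { printf("BaseTest TearDown()\r\n"); }

    // Written by each scenario's SetUp(), read by the tests in Tests.h.
    // In the real code these would be rich data structures.
    int expected = 0;
    int actual = 0;

    // Helper functions shared by the derived scenario classes would go here.
};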
Then for each scenario:
class ScenarioOne : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioOne SetUp()\r\n");
        actual = 20;
        expected = 20;
    }
};
#define SCENARIO_NAME ScenarioOne
#include "Tests.h"
The resultant effect is a set of tests defined once which can then be applied to multiple test scenarios.
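For example, ScenarioTwo from the original code would reuse the same test list like this (note the #undef before redefining SCENARIO_NAME; and since Tests.h is included once per scenario, it must not have an include guard):

class ScenarioTwo : public BaseTest {
public:
    virtual void SetUp()
    {
        BaseTest::SetUp();
        printf("ScenarioTwo SetUp()\r\n");
        actual = 98;
        expected = 98;
    }
};

#undef SCENARIO_NAME
#define SCENARIO_NAME ScenarioTwo
#include "Tests.h"  // expands the same TEST_F list again, this time against ScenarioTwo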
It does seem like a bit of a cheat, so I'm interested to hear if anyone has a better way of doing it.