
How to write a unit test where each test case has different input but does the same thing?

I need to create a unit test for a Python class. I have a database of inputs and the expected results that the UUT should generate for those inputs.

Here is the pseudo-code of what I want to do:

for i=1 to NUM_TEST_CASES:
    Load input for test case i
    execute UUT on the input and save output of run
    Load expected result for test case i
    Compare output of run with the expected result

Can I achieve this using the unittest package or is there some better testing package for this purpose?

asked Feb 03 '10 by zr.



2 Answers

The way you describe testing is an odd match for unit testing in general. Unit testing does not -- typically -- load test data or test results from external files. Generally, both are simply hard-coded in the unit test.

That's not to say that your plan won't work. It's just to say that it's atypical.

You have two choices.

  1. (What we do). Write a little script that does the "Load input for test case i", and "Load expected result for test case i". Use this to generate the required unittest code. (We use Jinja2 templates to write Python code from source files.)

    Then delete the source files. Yes, delete them. They'll only confuse you.

    What you have left is proper Unittest files in the "typical" form with static data for the test case and expected results.

  2. Write your setUp method to do the "Load input for test case i", and "Load expected result for test case i". Write your test method to exercise the UUT.

It might look like this.

class OurTest( unittest.TestCase ):
    def setUp( self ):
        self.data = self.load_data( self.source_file )
        self.expected = self.load_results( self.result_file )
        self.uut = ... UUT ...
    def runTest( self ):
        ... exercise UUT with self.data ...
        ... check results against self.expected, using self.assertXXX methods ...

Want to run this many times? One way is to do something like this.

class Test1( OurTest ):
    source_file = 'this'
    result_file = 'that'

class Test2( OurTest ):
    source_file= 'foo'
    result_file= 'bar'

This will allow the unittest main program to find and run your tests.
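Put together, the whole pattern might look like the sketch below. Note that `load_data`, `load_results`, and the `UUT` class are hypothetical stand-ins (the stubs just make the sketch executable); you would replace them with your real file loaders and your real unit under test.

```python
import unittest

def load_data(path):
    # Stub: in practice, parse the input file named by `path`.
    return [1, 2, 3]

def load_results(path):
    # Stub: in practice, parse the expected-result file named by `path`.
    return 6

class UUT:
    # Placeholder unit under test; stub behavior so the sketch runs.
    def run(self, data):
        return sum(data)

class OurTest(unittest.TestCase):
    source_file = 'default-input'
    result_file = 'default-expected'

    def setUp(self):
        self.data = load_data(self.source_file)
        self.expected = load_results(self.result_file)
        self.uut = UUT()

    def runTest(self):
        self.assertEqual(self.expected, self.uut.run(self.data))

class Test1(OurTest):
    source_file = 'this'
    result_file = 'that'

class Test2(OurTest):
    source_file = 'foo'
    result_file = 'bar'

if __name__ == '__main__':
    unittest.main()
```

Because each subclass only overrides the two file-name attributes, adding a new test case is a two-line job; `unittest.main()` discovers and runs every subclass.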

answered Oct 31 '22 by S.Lott


We do something like this in order to run what are actually integration (regression) tests within the unittest framework (actually an in-house customization of it, which gives us enormous benefits such as running the tests in parallel on a cluster of machines; the great added value of that customization is why we're so keen to use the unittest framework).

Each test is represented in a file (the parameters to use in that test, followed by the expected results). Our integration_test reads all such files from a directory, parses each of them, and then calls:

def addtestmethod(testcase, uut, testname, parameters, expresults):
  def testmethod(self):
    results = uut(parameters)
    self.assertEqual(expresults, results)
  testmethod.__name__ = testname
  setattr(testcase, testname, testmethod)

We start with an empty test case class:

class IntegrationTest(unittest.TestCase): pass

and then call addtestmethod(IntegrationTest, ... in a loop in which we're reading all the relevant files and parsing them to get testname, parameters, and expresults.
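A self-contained version of that loop might look like the following sketch. The directory scan and file parsing are stubbed out as an in-line table, and `uut` is a toy placeholder function; both are illustrative names, not part of the original setup.

```python
import unittest

def addtestmethod(testcase, uut, testname, parameters, expresults):
    # Attach a freshly built test method to the given TestCase class.
    def testmethod(self):
        results = uut(parameters)
        self.assertEqual(expresults, results)
    testmethod.__name__ = testname
    setattr(testcase, testname, testmethod)

class IntegrationTest(unittest.TestCase): pass

# Placeholder UUT; in the real setup this is the code under test.
def uut(params):
    return sum(params)

# Stand-in for the parsed files: (testname, parameters, expresults) triples.
cases = [
    ('test_two_numbers', [1, 2], 3),
    ('test_three_numbers', [1, 2, 3], 6),
]
for testname, parameters, expresults in cases:
    addtestmethod(IntegrationTest, uut, testname, parameters, expresults)
```

After the loop, `IntegrationTest` looks to any unittest runner exactly like a hand-written test case with one `test_*` method per data file.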

Finally, we call our in-house specialized test runner which does the heavy lifting (distributing the tests over available machines in a cluster, collecting results, etc). We didn't want to reinvent that rich-value-added wheel, so we're making a test case as close to a typical "hand-coded" one as needed to "fool" the test runner into working right for us;-).

Unless you have specific reasons (good test runners or the like) to use unittest's approach for your (integration?) tests, you may find your life is simpler with a different approach. However, this one is quite viable and we're quite happy with its results (which mostly include blazingly-fast runs of large suites of integration/regression tests!-).

answered Oct 31 '22 by Alex Martelli