
Proper way to organize test cases that involve a data file for each test case?

I'm writing a module that parses HTML for data and creates an object from it. Basically, I want a set of test cases where each case is an HTML file paired with a golden (expected) pickled object file.

As I make changes to the parser, I would like to run this suite to ensure that each HTML page still parses to equal its golden file (essentially a regression suite).

I can see how to code this as a single test case that loads all the file pairs from some directory and iterates through them, but I believe that would be reported as a single test, pass or fail. I want a report that says, for example, 45/47 pages parsed successfully.
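Roughly this shape, for illustration (the testdata directory, the .pickle naming, and parse_html here are placeholders for my real setup):

import os
import pickle
import unittest

def parse_html(text):
    # Stand-in for the real parser; returns the object to compare.
    return {'length': len(text)}

class TestAllPages(unittest.TestCase):
    def test_all_pages(self):
        # One test method looping over every html/golden pair: the first
        # failure aborts the loop, and the whole run reports a single test.
        for name in sorted(os.listdir('testdata')):
            if not name.endswith('.html'):
                continue
            with open(os.path.join('testdata', name)) as f:
                parsed = parse_html(f.read())
            with open(os.path.join('testdata', name + '.pickle'), 'rb') as f:
                self.assertEqual(parsed, pickle.load(f))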

How do I arrange this?

asked Aug 16 '11 by booshong


2 Answers

I've done similar things with the unittest framework by writing a function which creates and returns a test class. This function can then take in whatever parameters you want and customise the test class accordingly. You can also customise the __doc__ attribute of the test function(s) to get customised messages when running the tests.

I quickly knocked up the following example code to illustrate this. Instead of doing any actual testing, it uses the random module to fail some tests for demonstration purposes. When created, the classes are inserted into the global namespace so that a call to unittest.main() will pick them up. Depending on how you run your tests, you may wish to do something different with the generated classes.

import os
import random
import unittest

# Generate a test class for an individual file.
def make_test(filename):
    class TestClass(unittest.TestCase):
        def test_file(self):
            # Do the actual testing here.
            # parsed = do_my_parsing(filename)
            # golden = load_golden(filename)
            # self.assertEqual(parsed, golden, 'Parsing failed.')

            # Randomly fail roughly 1 in 11 tests for demonstration.
            if not random.randint(0, 10):
                self.assertEqual(0, 1, 'Parsing failed.')

        # Set the docstring so we get nice test messages.
        test_file.__doc__ = 'Test parsing of %s' % filename

    return TestClass

# Create a single file test.
Test1 = make_test('file1.html')

# Create several tests from a list.
for i in range(2, 5):
    globals()['Test%d' % i] = make_test('file%d.html' % i)

# Create them from a directory listing.
for dirname, subdirs, filenames in os.walk('tests'):
    for f in filenames:
        globals()['Test%s' % f] = make_test('%s/%s' % (dirname, f))

# If this file is being run, run all the tests.
if __name__ == '__main__':
    unittest.main()

A sample run:

$ python tests.py -v
Test parsing of file1.html ... ok
Test parsing of file2.html ... ok
Test parsing of file3.html ... ok
Test parsing of file4.html ... ok
Test parsing of tests/file5.html ... ok
Test parsing of tests/file6.html ... FAIL
Test parsing of tests/file7.html ... ok
Test parsing of tests/file8.html ... ok

======================================================================
FAIL: Test parsing of tests/file6.html
----------------------------------------------------------------------
Traceback (most recent call last):
  File "generic.py", line 16, in test_file
    self.assertEqual(0, 1, 'Parsing failed.')
AssertionError: Parsing failed.

----------------------------------------------------------------------
Ran 8 tests in 0.004s

FAILED (failures=1)
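If you only want to run one of the generated classes, unittest.main() also accepts test names on the command line, e.g. python tests.py -v Test1.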
answered Oct 02 '22 by Blair

The nose testing framework supports this. http://www.somethingaboutorange.com/mrl/projects/nose/

Also see here: How to generate dynamic (parametrized) unit tests in python?
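With nose, generator tests would look roughly like this (a sketch; the pages directory, the .pickle naming, and parse_html are stand-ins for your setup):

import os
import pickle

def parse_html(text):
    # Stand-in for the parser under test.
    return {'length': len(text)}

def check_page(path):
    with open(path) as f:
        parsed = parse_html(f.read())
    with open(path + '.pickle', 'rb') as f:
        assert parsed == pickle.load(f)

def test_pages():
    # nose runs each yielded (callable, argument) pair as its own test,
    # so every page shows up separately in the report.
    for name in sorted(os.listdir('pages')):
        if name.endswith('.html'):
            yield check_page, os.path.join('pages', name)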

Here's what I would do (untested):

import os
import unittest

files = os.listdir("/path/to/dir")

class SomeTests(unittest.TestCase):

    def _compare_files(self, file_name):
        with open('/path/to/dir/%s-golden' % file_name, 'r') as golden:
            with open('/path/to/dir/%s-trial' % file_name, 'r') as trial:
                self.assertEqual(golden.read(), trial.read())


def test_generator(file_name):
    def test(self):
        self._compare_files(file_name)
    return test

if __name__ == '__main__':
    for file_name in files:
        test_name = 'test_%s' % file_name
        test = test_generator(file_name)
        setattr(SomeTests, test_name, test)
    unittest.main()
answered Oct 02 '22 by nottombrown