Data-driven DUnit testing

The way DUnit normally works is that you write some published methods and DUnit runs them as tests. What I want to do is a little different: I want to create tests at run time based on data. I'm trying to test a module that processes input files to create output files. I have a set of test input files with corresponding known-good output files. The idea is to dynamically create tests, one per input file, that process the inputs and check the outputs against the known-good ones.

The actual source of the data isn't important here, though. The difficulty is making DUnit behave in a data-driven way. For the sake of this problem, suppose the data source were just a random number generator. Here is a concrete example problem that gets to the heart of the difficulty:

Create some test objects (TTestCase or whatever) at runtime, say 10 of them, where each one

  1. Is named at run time from a randomly generated integer. (By 'name' I mean the name of the test that appears in the test-runner tree.)
  2. Passes or fails based on a random integer. Pass for even, fail for odd.

Judging from its design, DUnit looks flexible enough to make such things possible, but I'm not sure that it is. I tried to create my own test class by inheriting from TAbstractTest and ITest, but some crucial methods weren't accessible. I also tried inheriting from TTestCase, but that class is closely tied to the idea of running published methods, and the tests are named after those methods. I couldn't just have a single method called, say, 'go', because then all my tests would be named 'go', and I want each test to be individually named.

Alternatively, is there some other testing framework that could do what I want?

asked Apr 01 '09 11:04 by dan-gph


2 Answers

program UnitTest1;

{$IFDEF CONSOLE_TESTRUNNER}
{$APPTYPE CONSOLE}
{$ENDIF}

uses
  Forms, Classes, SysUtils,
  TestFramework,
  GUITestRunner,
  TextTestRunner;

{$R *.RES}

type
  // One test instance per data value; the value determines the name and
  // whether the test passes.
  TIntTestCase = class(TTestCase)
  private
    FValue: Integer;
  public
    constructor Create(AValue: Integer); reintroduce;
    // Overridden so each instance shows its own name in the runner tree.
    function GetName: string; override;
  published
    procedure Run;
  end;

{ TIntTestCase }

constructor TIntTestCase.Create(AValue: Integer);
begin
  inherited Create('Run'); // 'Run' is the published method to invoke
  FValue := AValue;
end;

function TIntTestCase.GetName: string;
begin
  Result := Format('Run_%.3d', [FValue]);
end;

procedure TIntTestCase.Run;
begin
  Check(FValue mod 2 = 0, Format('%d is not an even value', [FValue]));
end;

procedure RegisterTests;
const
  TestCount = 10;
  ValueHigh = 1000;
var
  I: Integer;
begin
  Randomize;
  for I := 0 to TestCount - 1 do
    RegisterTest(TIntTestCase.Create(Random(ValueHigh) + 1));
end;

begin
  Application.Initialize;
  RegisterTests;
  if IsConsole then
    TextTestRunner.RunRegisteredTests
  else
    GUITestRunner.RunRegisteredTests;
end.
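
The same pattern extends naturally to your original file-based problem. This is only a sketch: ProcessFile (the module under test) and FilesAreEqual are hypothetical helpers you would supply, and the file-extension conventions are assumptions.

type
  TFileTestCase = class(TTestCase)
  private
    FInputFile: string;
  public
    constructor Create(const AInputFile: string); reintroduce;
    function GetName: string; override;
  published
    procedure Run;
  end;

constructor TFileTestCase.Create(const AInputFile: string);
begin
  inherited Create('Run');
  FInputFile := AInputFile;
end;

function TFileTestCase.GetName: string;
begin
  Result := ExtractFileName(FInputFile); // test named after its data file
end;

procedure TFileTestCase.Run;
var
  OutFile: string;
begin
  OutFile := ChangeFileExt(FInputFile, '.out');
  ProcessFile(FInputFile, OutFile); // hypothetical: module under test
  Check(FilesAreEqual(OutFile, ChangeFileExt(FInputFile, '.good')),
    FInputFile + ': output differs from known good');
end;

You would then register one instance per input file found on disk, just as RegisterTests above registers one per random value.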
answered Oct 18 '22 22:10 by Ondrej Kelle


I'd say you basically want a single "super-test" method that exercises each data file in turn. This is what we do with one of our DUnit tests: load each available file in a loop, run the test against it, and call Check as appropriate.
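
A minimal sketch of that approach, assuming hypothetical ProcessFile and FilesAreEqual helpers and an assumed directory layout:

type
  TDataFileTests = class(TTestCase)
  published
    procedure TestAllDataFiles;
  end;

procedure TDataFileTests.TestAllDataFiles;
var
  SR: TSearchRec;
begin
  // Enumerate every input file and check its output against the known good.
  if FindFirst('TestData\*.in', faAnyFile, SR) = 0 then
  try
    repeat
      ProcessFile('TestData\' + SR.Name, 'Output\' + SR.Name); // hypothetical
      Check(FilesAreEqual('Output\' + SR.Name, 'KnownGood\' + SR.Name),
        SR.Name + ': output differs from known good');
    until FindNext(SR) <> 0;
  finally
    FindClose(SR);
  end;
end;

The trade-off is that the whole loop appears as a single test in the runner, and because a failing Check raises an exception, only the first failing file is reported per run.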

The alternative, which we also use in the same project to test the final app and its data loading and analysis, is to use something like FinalBuilder to run the application in a loop with the various data files (presumably you could loop on the DUnit app too, passing a parameter). The app runs, does an analysis, then saves and quits. A second app then compares the resulting data with the ideal data and reports a failure if appropriate.

answered Oct 18 '22 22:10 by mj2008