Nondeterminism in Unit Testing

It seems that in many unit tests, the values that parameterize the test are either baked into the tests themselves or declared in a predetermined way.

For example, here is a test taken from NUnit's own unit tests (EqualsFixture.cs):

[Test]
public void Int() 
{
    int val = 1;
    int expected = val;
    int actual = val;

    Assert.IsTrue(expected == actual);
    Assert.AreEqual(expected, actual);
}

This has the advantage of being deterministic; if you run the test once and it fails, it will continue to fail until the code is fixed. However, you end up testing only a limited set of values.

I can't help but feel like this is a waste, though; the exact same test is probably run with the exact same parameters hundreds if not thousands of times across the life of a project.

What about randomizing as much input to all unit tests as possible, so that each run has a shot at revealing something new?

In the previous example, perhaps:

[Test]
public void Int() 
{
    Random rnd = new Random();
    int val = rnd.Next();
    int expected = val;
    int actual = val;
    Console.WriteLine("val is {0}", val);
    Assert.IsTrue(expected == actual);
    Assert.AreEqual(expected, actual);
}

(If the code expected a string, perhaps a random string known to be valid for the particular function could be used each time)
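
For a string parameter, a small helper along these lines could supply a random but valid value on each run. This is only a sketch; the alphabet and length range here are assumptions for illustration, not anything the function under test would actually require:

static readonly Random rnd = new Random();

// Builds a random string from a known-valid alphabet. The alphabet and the
// length range are illustrative assumptions, not requirements of any
// particular function under test.
static string RandomValidString(int maxLength = 20)
{
    const string alphabet = "abcdefghijklmnopqrstuvwxyz";
    int length = rnd.Next(1, maxLength + 1);
    var sb = new System.Text.StringBuilder(length);
    for (int i = 0; i < length; i++)
    {
        sb.Append(alphabet[rnd.Next(alphabet.Length)]);
    }
    return sb.ToString();
}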

The benefit would be that the more times you run a test, the larger the set of values you know the code can handle correctly.

Is this useful? Evil? Are there drawbacks to this? Am I completely missing the point of unit testing?

Thank you for your thoughts.

asked Feb 23 '10 by rh.


2 Answers

You want your unit tests to be repeatable so that they will always behave in the same way unless the code changes. Then, if the code changes and causes the unit test to fail, you can fix the code, and the unit test has served its purpose. Furthermore, you know that the code is [probably] fixed when the unit test passes again.

Having random unit tests could find unusual errors, but it shouldn't be necessary. If you know how the code works (compare white-box and black-box approaches to testing), using random values shouldn't ever reveal anything that well-thought-out non-random unit tests wouldn't. And I'd hate to be told "run the tests a few times and this error should appear".
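
One common compromise (a sketch of my own, not something this answer proposes) is to seed the random generator with a fixed or logged value, so the data still looks arbitrary but any failure can be replayed exactly:

[Test]
public void Int_WithSeededRandom()
{
    // A fixed (or at least logged) seed keeps the test repeatable: the same
    // seed always produces the same sequence, so a failing run can be replayed.
    int seed = 12345;
    Random rnd = new Random(seed);
    int val = rnd.Next();
    int expected = val;
    int actual = val;

    Console.WriteLine("seed is {0}, val is {1}", seed, val);
    Assert.AreEqual(expected, actual);
}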

answered Sep 22 '22 by David Johnstone

What you are proposing makes a lot of sense, provided that you do it correctly. You don't have to accept the conventional wisdom that says you must never have non-determinism in your tests.

What is really important is that each test must always exercise the same code path. That is not quite the same thing.

You can adopt what I call Constrained Non-Determinism in unit testing. This can drive you towards a more Specification-Oriented way of writing tests.
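
As a rough illustration (my own sketch, not code from this answer), such a test uses an arbitrary but constrained "anonymous" value, so the data varies between runs while the exercised code path does not:

[Test]
public void AppendingCharacterIncreasesLengthByOne()
{
    // Constrained Non-Determinism: the concrete text is irrelevant to the
    // behaviour being specified, so an "anonymous" value (here a GUID string)
    // stands in for it. Any value in this equivalence class drives the same
    // code path, so the test's behaviour stays repeatable even though the
    // data varies.
    string anyText = Guid.NewGuid().ToString();

    string result = anyText + "!";

    Assert.AreEqual(anyText.Length + 1, result.Length);
}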

answered Sep 20 '22 by Mark Seemann