Good unit tests should be reproducible and independent of external factors such as the environment or the order in which they run. They should also be fast: developers write unit tests so they can run them repeatedly and check that no bugs have been introduced.
That being said, it is generally accepted that 80% coverage is a good goal to aim for. Pushing for higher coverage can turn out to be costly while not necessarily producing enough benefit. The first time you run your coverage tool, you might find that you have a fairly low percentage of coverage.
Let me begin by plugging a source: Pragmatic Unit Testing in Java with JUnit (there's a version with C#/NUnit too, but I have this one; it's agnostic for the most part. Recommended.)
Good tests should be A TRIP: Automatic, Thorough, Repeatable, Independent, and Professional. (The acronym isn't sticky enough - I have a printout of the cheat sheet from the book that I had to pull out to make sure I got this right.)
Professional: in the long run you'll have as much test code as production code (if not more), so follow the same standard of good design for your test code: well-factored methods and classes with intention-revealing names, no duplication, tests with good names, and so on.
Good tests also run fast. Any test that takes over half a second to run needs to be worked on. The longer the test suite takes to run, the less frequently it will be run, and the more changes the developer will try to sneak in between runs; if anything breaks, it will take longer to figure out which change was the culprit.
Update 2010-08:
Apart from these, most of the rest are guidelines that cut down on low-benefit work: e.g. 'Don't test code that you don't own' (such as third-party DLLs), and don't bother testing getters and setters. Keep an eye on the cost-to-benefit ratio and on defect probability.
Most of the answers here seem to address unit testing best practices in general (when, where, why and what), rather than actually writing the tests themselves (how). Since the question seemed pretty specific on the "how" part, I thought I'd post this, taken from a "brown bag" presentation that I conducted at my company.
1. Use long, descriptive test method names.
- Map_DefaultConstructorShouldCreateEmptyGisMap()
- ShouldAlwaysDelegateXMLCorrectlyToTheCustomHandlers()
- Dog_Object_Should_Eat_Homework_Object_When_Hungry()
2. Write your tests in an Arrange/Act/Assert style.
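For example, a minimal sketch of the Arrange/Act/Assert layout (this example is mine, using NUnit and the BCL Stack<int>, not code from the original presentation):
[Test]
public void Push_OnEmptyStack_IncreasesCountToOne()
{
    // Arrange: create the object under test
    var stack = new Stack<int>();

    // Act: perform the single behaviour being verified
    stack.Push(7);

    // Assert: check the outcome, with a failure message
    Assert.That(stack.Count == 1, "Pushing onto an empty stack should leave exactly one item");
}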
3. Always provide a failure message with your Asserts.
Assert.That(x == 2 && y == 2,
    "An incorrect number of begin/end element processing events was raised by the XElementSerializer");
4. Comment the reason for the test – what’s the business assumption?
/// A layer cannot be constructed with a null gisLayer, as every function
/// in the Layer class assumes that a valid gisLayer is present.
[Test]
public void ShouldNotAllowConstructionWithANullGisLayer()
{
}
5. Every test must always revert the state of any resource it touches.
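For instance, here is a sketch of using setup/teardown hooks to restore state (the temp-file scenario is hypothetical, purely for illustration):
using System.IO;
using NUnit.Framework;

[TestFixture]
public class FileImportTests
{
    private string _tempFile;

    [SetUp]
    public void CreateTempFile()
    {
        // Each test gets a fresh scratch file
        _tempFile = Path.GetTempFileName();
    }

    [TearDown]
    public void DeleteTempFile()
    {
        // Revert the file system to its prior state, even if the test failed
        if (File.Exists(_tempFile))
            File.Delete(_tempFile);
    }
}
Because [TearDown] runs after every test, pass or fail, no test leaves residue behind that could affect the next one.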
Keep these goals in mind (adapted from the book xUnit Test Patterns by Gerard Meszaros).
Some things to make this easier:
Don't forget that you can do integration testing with your xUnit framework too, but keep integration tests and unit tests separate (one way to do this is sketched below).
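A sketch of one way to separate them, assuming NUnit's [Category] attribute (the fixture and test names are hypothetical):
using NUnit.Framework;

[TestFixture]
public class OrderRepositoryTests
{
    [Test]
    public void MapsOrderFieldsCorrectly()
    {
        // Pure unit test: no external systems involved
    }

    [Test]
    [Category("Integration")]
    public void SavesOrderToRealDatabase()
    {
        // Touches a real database, so it is tagged for a separate, slower run
    }
}
Most runners can then include or exclude tests by category, so the fast unit suite stays fast.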
Tests should be isolated. One test should not depend on another. Going further, a test should not rely on external systems; in other words, test your code, not the code your code depends on. You can test those interactions as part of your integration or functional tests.
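For example, a sketch of isolating a class from its dependency with a hand-rolled stub (all of the type names here are hypothetical):
using NUnit.Framework;

// The production class depends on an interface, not a concrete service.
public interface IExchangeRateService
{
    decimal GetRate(string currency);
}

public class PriceCalculator
{
    private readonly IExchangeRateService _rates;

    public PriceCalculator(IExchangeRateService rates) { _rates = rates; }

    public decimal InLocalCurrency(decimal usdPrice, string currency)
    {
        return usdPrice * _rates.GetRate(currency);
    }
}

// A stub keeps the test isolated from the real exchange-rate service.
public class StubRateService : IExchangeRateService
{
    public decimal GetRate(string currency) { return 2m; }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ConvertsUsingTheSuppliedRate()
    {
        var calculator = new PriceCalculator(new StubRateService());
        Assert.That(calculator.InLocalCurrency(10m, "EUR") == 20m,
            "Price should be the USD price multiplied by the stubbed rate");
    }
}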
Some properties of great unit tests:
- When a test fails, it should be immediately obvious where the problem lies. If you have to use the debugger to track down the problem, then your tests aren't granular enough. Having exactly one assertion per test helps here.
- When you refactor, no tests should fail.
- Tests should run so fast that you never hesitate to run them.
- All tests should pass, always; no non-deterministic results.
- Unit tests should be well-factored, just like your production code.
@Alotor: If you're suggesting that a library should only have unit tests at its external API, I disagree. I want unit tests for each class, including classes that I don't expose to external callers. (However, if I feel the need to write tests for private methods, then I need to refactor.)
EDIT: There was a comment about duplication caused by "one assertion per test". Specifically, if you have some code to set up a scenario and then want to make multiple assertions about it, but only allow one assertion per test, you might duplicate the setup across multiple tests.
I don't take that approach. Instead, I use test fixtures per scenario. Here's a rough example:
[TestFixture]
public class StackTests
{
    [TestFixture]
    public class EmptyTests
    {
        private Stack<int> _stack;

        [SetUp]
        public void SetUp()
        {
            _stack = new Stack<int>();
        }

        [Test]
        public void PopFails()
        {
            // Pop on an empty Stack<int> throws InvalidOperationException
            Assert.Throws<InvalidOperationException>(() => _stack.Pop());
        }

        [Test]
        public void IsEmpty()
        {
            Assert.That(_stack.Count, Is.EqualTo(0));
        }
    }

    [TestFixture]
    public class PushedOneTests
    {
        private Stack<int> _stack;

        [SetUp]
        public void SetUp()
        {
            _stack = new Stack<int>();
            _stack.Push(7);
        }

        // Tests for one item on the stack...
    }
}
What you're after is delineation of the behaviours of the class under test.
The basic intent is to increase your confidence in the behaviour of the class.
This is especially useful when looking at refactoring your code. Martin Fowler has an interesting article regarding testing over at his web site.
HTH.
cheers,
Rob