I'm currently broadening my unit testing by utilising mock objects (NSubstitute in this particular case). However, I'm wondering what the current wisdom is when creating mock objects. For instance, I'm working with an object that contains various routines to grab and process data - no biggie here, but it will be used in a fair number of tests.
Should I create a shared function that returns the mock object, with all the appropriate methods and behaviours mocked for pretty much the whole testing project, and call that object in my unit tests? Or should I mock the object in every unit test, mocking only the behaviour I need for that test (although there will be times I'll be mocking the same behaviour on more than one occasion)?
Thoughts or advice is gratefully received...
Mocking for unit testing is when you create an object that implements the behavior of a real subsystem in controlled ways. In short, mocks are used as a replacement for a dependency.
Mock objects help isolate the component being tested from the components it depends on, and applying mock objects effectively is an important part of test-driven development (TDD). A mock object can be useful in place of a real object that runs slowly or inefficiently in practical situations.
You should only mock the behaviour that is necessary for the test to pass. Everything else should be replaced by a dummy or null (if possible).
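For example, here's a minimal NSubstitute/NUnit sketch (the IDataService interface and ReportGenerator class are hypothetical, purely for illustration): only the call the test actually depends on is stubbed, and the rest of the interface is left at NSubstitute's harmless defaults.

```csharp
using NSubstitute;
using NUnit.Framework;

// Hypothetical dependency: only part of it matters to the test below.
public interface IDataService
{
    string GetCustomerName(int id);
    void LogAccess(int id);
}

// Hypothetical class under test.
public class ReportGenerator
{
    private readonly IDataService _data;
    public ReportGenerator(IDataService data) => _data = data;

    public string BuildTitle(int id)
    {
        _data.LogAccess(id);
        return $"Report for {_data.GetCustomerName(id)}";
    }
}

[TestFixture]
public class ReportGeneratorTests
{
    [Test]
    public void Title_uses_customer_name()
    {
        // Stub only the behaviour this test relies on; the void
        // LogAccess call is left as NSubstitute's no-op default.
        var dataService = Substitute.For<IDataService>();
        dataService.GetCustomerName(42).Returns("Acme Ltd");

        var generator = new ReportGenerator(dataService);

        Assert.That(generator.BuildTitle(42), Is.EqualTo("Report for Acme Ltd"));
    }
}
```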
I'm not sure if there is an agreed "current wisdom" on this, but here's my 2 cents.
First, as @codebox pointed out, re-creating your mocks for each unit test is a good idea, as you want your unit tests to run independently of each other. Doing otherwise can result in tests that pass when run together but fail when run in isolation (or vice versa). Creating the mocks required for a test is commonly done in the test setup ([SetUp] in NUnit, the constructor in xUnit), so each test gets a newly created mock.
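In NUnit that looks something like the following sketch (reusing the hypothetical IDataService from above). Because the substitute is re-created in [SetUp], no configuration can leak from one test into the next.

```csharp
[TestFixture]
public class CustomerNameTests
{
    private IDataService _dataService;

    [SetUp]
    public void SetUp()
    {
        // Runs before each test, so every test starts with a fresh mock.
        _dataService = Substitute.For<IDataService>();
    }

    [Test]
    public void Configured_call_returns_the_stubbed_value()
    {
        _dataService.GetCustomerName(1).Returns("Alice");
        Assert.That(_dataService.GetCustomerName(1), Is.EqualTo("Alice"));
    }

    [Test]
    public void Fresh_mock_has_no_leftover_configuration()
    {
        // The stub from the previous test is gone; NSubstitute returns
        // an empty string for unconfigured string-returning calls.
        Assert.That(_dataService.GetCustomerName(1), Is.EqualTo(string.Empty));
    }
}
```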
In terms of configuring these mocks, it depends on the situation and how you test. My preference is to configure them in each test with the minimum amount of configuration necessary. This is a good way of communicating exactly what that test requires of its dependencies. There is nothing wrong with some duplication in these cases.
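To illustrate (again with the hypothetical IDataService and ReportGenerator from the first sketch): each test configures only the call it actually depends on, and the small duplication between tests documents exactly what each one requires.

```csharp
[TestFixture]
public class MinimalConfigurationTests
{
    private IDataService _dataService;

    [SetUp]
    public void SetUp() => _dataService = Substitute.For<IDataService>();

    [Test]
    public void Title_uses_customer_name()
    {
        // Stubs only the call this test depends on.
        _dataService.GetCustomerName(7).Returns("Bob");

        var generator = new ReportGenerator(_dataService);

        Assert.That(generator.BuildTitle(7), Is.EqualTo("Report for Bob"));
    }

    [Test]
    public void Access_is_logged()
    {
        // No stubbing needed at all; this test only verifies an interaction.
        var generator = new ReportGenerator(_dataService);
        generator.BuildTitle(7);

        _dataService.Received(1).LogAccess(7);
    }
}
```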
If a number of tests require the same configuration, I would consider using a scenario-based test fixture (link disclaimer: shameless self-promotion). A scenario could be something like When_the_service_is_unavailable, and the setup for that scenario could configure the mocked service to throw an exception or return an error code. Each test then makes assertions based on that common configuration/scenario (e.g. should display an error message, should send an email to the admin, etc.).
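A rough sketch of that scenario, assuming a hypothetical IPaymentService and CheckoutController: the shared [SetUp] puts the mocked service into its failure mode, and each test asserts one consequence of it.

```csharp
using System;
using NSubstitute;
using NUnit.Framework;

// Hypothetical dependency and controller, just for illustration.
public interface IPaymentService
{
    void Charge(decimal amount);
}

public class CheckoutController
{
    private readonly IPaymentService _payments;
    public CheckoutController(IPaymentService payments) => _payments = payments;

    public string ErrorMessage { get; private set; } = "";
    public bool OrderCompleted { get; private set; }

    public void SubmitOrder(decimal amount)
    {
        try
        {
            _payments.Charge(amount);
            OrderCompleted = true;
        }
        catch (Exception)
        {
            ErrorMessage = "Payment service unavailable";
        }
    }
}

[TestFixture]
public class When_the_service_is_unavailable
{
    private CheckoutController _controller;

    [SetUp]
    public void GivenTheServiceThrows()
    {
        // The scenario's shared setup: the mocked service always fails.
        var payments = Substitute.For<IPaymentService>();
        payments.When(p => p.Charge(Arg.Any<decimal>()))
                .Do(_ => { throw new TimeoutException(); });

        _controller = new CheckoutController(payments);
        _controller.SubmitOrder(10m);
    }

    [Test]
    public void Should_display_an_error_message() =>
        Assert.That(_controller.ErrorMessage, Is.Not.Empty);

    [Test]
    public void Should_not_complete_the_order() =>
        Assert.That(_controller.OrderCompleted, Is.False);
}
```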
Another option, if you have lots of duplicated bits of configuration, is to use a Test Data Builder. This gives you reusable ways of configuring a number of different aspects of your mock or any other test data.
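A minimal builder sketch, assuming a hypothetical Customer type: the builder supplies sensible defaults, so each test states only the values it cares about.

```csharp
public class Customer
{
    public string Name { get; set; }
    public string Country { get; set; }
    public bool IsActive { get; set; }
}

public class CustomerBuilder
{
    // Sensible defaults; tests override only what matters to them.
    private string _name = "Default Name";
    private string _country = "GB";
    private bool _isActive = true;

    public CustomerBuilder WithName(string name) { _name = name; return this; }
    public CustomerBuilder WithCountry(string country) { _country = country; return this; }
    public CustomerBuilder Inactive() { _isActive = false; return this; }

    public Customer Build() =>
        new Customer { Name = _name, Country = _country, IsActive = _isActive };
}

// Usage in a test: only the relevant aspect is specified.
// var customer = new CustomerBuilder().Inactive().Build();
```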
Finally, if you're finding that a large amount of configuration is required, it might be worth changing the interface of the test dependency to be less "chatty". By looking for a valid abstraction that reduces the number of calls required by the class under test, you'll have less to configure in your tests and a nice encapsulation of the responsibilities on which that class depends.
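As a sketch of what that refactoring might look like (both interfaces are hypothetical): the "chatty" version forces three stubs on every test that touches pricing, while the consolidated abstraction needs only one.

```csharp
// Chatty: three separate calls to stub in every test that touches pricing.
public interface IPricingData
{
    decimal GetBasePrice(int productId);
    decimal GetDiscountRate(int customerId);
    decimal GetTaxRate(string region);
}

// Less chatty: one higher-level call encapsulates the responsibility,
// so each test configures a single stub.
public interface IPriceCalculator
{
    decimal GetFinalPrice(int productId, int customerId, string region);
}
```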
It is worth experimenting with a few different approaches and seeing what works for you. Any removal of duplication needs to be balanced against keeping each test case independent, simple, maintainable and reliable. If you find that a large number of tests fail after small changes, or that you can't figure out the configuration an individual test needs, or that tests fail depending on the order in which they are run, then you'll want to refine your approach.