I have typically had a 1:1 mapping between my product assemblies and my unit test assemblies. I generally try to keep the overall number of assemblies low, and a typical solution may look something like...
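For illustration only (these project names are hypothetical, not from my actual solution):

    MyProduct.Core
    MyProduct.Core.Tests
    MyProduct.Web
    MyProduct.Web.Tests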
Lately at work, people have been mentioning having just a single unit test project vs. breaking them down by the assembly they are testing. I know back in the day this made life easier if you were running NCover, etc., as part of your build (which no longer matters, of course).
What is the general rationale behind single vs. multiple unit test projects? Other than reducing the number of projects in a solution, is there a concrete reason to go one way or the other? I get the impression this may be one of those "preference" things, but Googling hasn't turned up much.
I write at least one test per method, and sometimes more if the method requires different setup to test the good cases and the bad cases. But you should NEVER test more than one method in one unit test. It reduces the amount of work, and the number of errors, involved in fixing your tests when your API changes.
This guideline applies even more strongly if you work in a test-driven manner rather than writing the tests after the code has been written. The main goal here is better test coverage.
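For instance, a minimal NUnit sketch (the Calculator class here is hypothetical, just for illustration):

    using System;
    using NUnit.Framework;

    // Hypothetical class under test.
    public class Calculator
    {
        public int Divide(int a, int b) => a / b;
    }

    [TestFixture]
    public class CalculatorTests
    {
        // One test for the good case...
        [Test]
        public void Divide_ReturnsQuotient()
        {
            Assert.AreEqual(2, new Calculator().Divide(10, 5));
        }

        // ...and a separate test for the bad case, rather than
        // exercising several methods (or cases) in a single test.
        [Test]
        public void Divide_ByZero_Throws()
        {
            Assert.Throws<DivideByZeroException>(() => new Calculator().Divide(10, 0));
        }
    }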
Tests should never depend on each other. If your tests have to be run in a specific order, then you need to change your tests. Instead, you should make proper use of the Setup and TearDown features of your unit-testing framework to ensure each test is ready to run individually.
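With NUnit, for example, each test can start from a clean state regardless of execution order (the in-memory repository below is hypothetical):

    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class RepositoryTests
    {
        private List<string> _repository;

        // Runs before every test, so each test starts from the same
        // state no matter which order the tests run in.
        [SetUp]
        public void SetUp()
        {
            _repository = new List<string> { "seed-item" };
        }

        // Runs after every test, so no test leaks state into the next.
        [TearDown]
        public void TearDown()
        {
            _repository.Clear();
        }

        [Test]
        public void Add_IncreasesCount()
        {
            _repository.Add("new-item");
            Assert.AreEqual(2, _repository.Count);
        }

        [Test]
        public void Remove_DecreasesCount()
        {
            // Passes regardless of whether Add_IncreasesCount ran first.
            _repository.Remove("seed-item");
            Assert.AreEqual(0, _repository.Count);
        }
    }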
It isn't realistic, or necessary, to expect 100% code coverage from unit tests. The unit tests you create depend on business needs and on the complexity of the application. Aim for 95% or higher coverage with unit tests for new application code.
There is no definite answer, because it all depends on what you work on as well as personal taste. However, you definitely want to arrange things in a way that lets you work effectively.
For me this means: I want to find things quickly, I want to see what tests what, and I want to be able to run smaller pieces of the test suite for better control when I want to profile the tests or do other things with them. That is especially useful when you're debugging failing tests. I don't want to spend extra time figuring anything out; how things are mapped and what belongs to what should speak for itself.
Another very important thing for me is isolation and clear boundaries: you want an easy way to refactor parts of your big project out into an independent project.
Personally, I always arrange my tests around how my software is structured, which means a one-to-one mapping between a class and its tests, and between a library and its test executable. This gives you a test structure that mirrors your software structure, which in turn makes things easy to find. It also provides a natural split in case something is moved out independently.
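As a sketch (all names hypothetical), the mirroring might look like this:

    // In the product assembly MyProduct.Core:
    namespace MyProduct.Core
    {
        public class OrderValidator
        {
            public bool IsValid(decimal total) => total > 0;
        }
    }

    // In the matching test assembly MyProduct.Core.Tests:
    namespace MyProduct.Core.Tests
    {
        using NUnit.Framework;
        using MyProduct.Core;

        [TestFixture]
        public class OrderValidatorTests  // one test class per product class
        {
            [Test]
            public void IsValid_RejectsNonPositiveTotal()
            {
                Assert.IsFalse(new OrderValidator().IsValid(0));
            }
        }
    }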
This is my personal choice after trying various ways to do things.
In my opinion, grouping things together when there are too many of them is not necessarily a good thing. It can be, but in the context of this discussion I believe it is the wrong argument for a single test project: merging too many test projects, each with many files inside, just gives you one project with a lot of test files. I believe the real problem is that the solution you're working on is getting big. Maybe there are other things you can do to avoid having "one world"? :)
In addition to the other (good) answers, consider that on larger project teams individual team members may create their own solutions to include only the subset of projects they are working on.
A single test project covering everything assumes a monolithic solution, and it breaks down in that scenario.