Best option for retrospective application of TDD into a C# codebase

I have an existing framework consisting of five C# libraries. The framework has been in heavy use since 2006 and is the main code base for the majority of my projects. My company wishes to roll out TDD for reasons of software quality; having worked through many tutorials and read the theory, I understand the benefits of TDD.

Time is not unlimited, so I need a pragmatic plan. From what I know already, the options as I see them are:

A) One test project covering objects from all 5 library components. A range of high-level tests could be a starting point for what at first appears to be a very large software library.

B) A test project for each of the 5 library components. These projects would test functions at the lowest level, in isolation from the other library components.

C) As the code is widely regarded as working, only add unit tests for bug fixes or new features. Write a test that fails on the buggy logic, using the steps to reproduce the bug, then fix the code until the test passes. This gives confidence that the bug is fixed and also that it will not be reintroduced later in the cycle.
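
For option C, I imagine the workflow would look roughly like this minimal MSTest sketch (the class and method names are just placeholders, not real code from the framework):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InvoiceCalculatorRegressionTests
{
    // Hypothetical regression test: reproduces a reported rounding bug.
    // It fails against the current (buggy) code; once the code is fixed it
    // passes and guards against the bug being reintroduced.
    [TestMethod]
    public void CalculateTotal_RoundsToNearestPenny()
    {
        var calculator = new InvoiceCalculator();

        decimal total = calculator.CalculateTotal(unitPrice: 0.333m, quantity: 3);

        Assert.AreEqual(1.00m, total);
    }
}
```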

Whichever option is chosen, mocking may be needed to replace external dependencies such as the following (a rough sketch of one approach comes after the list):

  • Database
  • Web Service
  • Configuration Files

If anybody has any more input, it would be very helpful. I plan to use Microsoft's built-in MSTest in Visual Studio 2010.

asked Oct 10 '11 by Paul


1 Answer

We have a million-and-a-half line code base. Our approach was to start by writing some integration tests (your option A). These tests exercise almost the whole system end-to-end: they copy database files from a repository, connect to that database, perform some operations on the data, and then output reports to CSV and compare them against known-good output. They're nowhere near comprehensive, but they exercise a large number of the things that our clients rely on our software to do.
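
To make the shape of those tests concrete, a stripped-down sketch might look something like this (the helper names and paths are invented for illustration, not our actual code):

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ReportIntegrationTests
{
    // End-to-end check: restore a known database copy, run a report against it,
    // and compare the generated CSV with a previously approved "golden" file.
    // TestDatabase.Restore and ReportRunner.Run are hypothetical helpers.
    [TestMethod]
    public void MonthlySalesReport_MatchesApprovedOutput()
    {
        string databasePath = TestDatabase.Restore("MonthlySalesSample");

        string actualCsv = ReportRunner.Run(databasePath, "MonthlySales");
        string expectedCsv = File.ReadAllText(@"Expected\MonthlySales.csv");

        Assert.AreEqual(expectedCsv, actualCsv);
    }
}
```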

These tests run very slowly, of course; but we still run all of them continuously, six years later (and now spread across eight different machines), because they catch things that we still don't have unit tests for.

Once we had a decent base of integration tests, we spent some time adding finer-grained tests around the high-traffic parts of the system (your option B). We were given time to do this because there was a perception of poor quality in our code.

Once we had improved the quality to a certain threshold, they started asking us to do real work again. So we settled into a rhythm of writing tests for new code (your option C). In addition, if we need to make changes to an existing piece of code that doesn't yet have unit tests, we might spend some time covering existing functionality with tests before we start making changes.
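
Those pre-change tests are essentially characterization tests: they pin down what the code does today, whether or not that behaviour matches any spec, so you find out immediately if a change alters it. A rough sketch with made-up names:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LegacyPricingCharacterizationTests
{
    // Characterization test: the expected value was captured by running the
    // existing code, not taken from a specification. It documents current
    // behaviour so a later change can't alter it unnoticed.
    [TestMethod]
    public void ApplyDiscount_CurrentBehaviourForTierTwoCustomers()
    {
        var pricing = new LegacyPricing();

        decimal result = pricing.ApplyDiscount(orderTotal: 1000m, customerTier: 2);

        Assert.AreEqual(850m, result); // observed output of the current implementation
    }
}
```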

All of your approaches have their merits, but as you gain test coverage over time, the relative payoffs will change. For our code base, I think our strategy was a good one; integration tests will help catch any errors you make when trying to break dependencies to add unit tests.

answered Oct 20 '22 by Joe White