How to adopt TDD and ensure adherence?

I'm a senior engineer working on a team with four other developers on a home-grown content management application that drives a large US pro sports web site. We embarked on this project some two years ago and chose Java as our platform, though my question is not Java-specific. Since we started, there has been some churn in our ranks. Each of us has a significant degree of latitude in deciding implementation details, although important decisions are made by consensus.

Ours is a relatively young project, yet we are already at the point where no single developer knows everything about the app. The primary reasons are our rapid pace of development, most of which occurs in a crunch leading up to our sport's season opener, and the fact that our test coverage is essentially zero.

We all understand the theoretical benefits of TDD and agree in principle that the methodology would have improved our lives and our code quality had we adopted it at the start and stuck with it through the years. It never took hold, and now we're in charge of an untested codebase that still requires a lot of expansion, is actively used in production, and is relied upon by the corporate structure.

Faced with this situation, I see only two possible solutions: (1) retroactively write tests for the existing code, or (2) rewrite as much of the app as is practical while fanatically adhering to TDD principles. I perceive (1) as by and large impractical because we have a hellish dependency graph within the project. Almost none of our components can be tested in isolation; we don't know all the use cases; and the use cases will likely change during the testing push due to business requirements or as a reaction to unforeseen issues. For these reasons, we can't really be sure that our tests will turn out to be high quality once we're done. There's a risk of leading the team into a false sense of security whereby subtle bugs will creep in without anyone noticing. Given the bleak prospects with regard to ROI, it would be hard for me or our team lead to justify this endeavor to management.
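To make the coupling problem concrete, here is a simplified, hypothetical example of the pattern that pervades our codebase (names invented for illustration, not our actual code): a component that constructs its collaborators internally can only be exercised against live infrastructure, so no unit test can stand alone.

    // Hypothetical example (invented names): the collaborator is constructed
    // internally, so no test can substitute a fake; every test run would
    // touch a real database.
    class DatabaseConnection {
        DatabaseConnection(String url) { /* opens a live connection */ }
        void save(Object row)          { /* writes to the live database */ }
    }

    public class ArticlePublisher {
        private final DatabaseConnection db =
                new DatabaseConnection("jdbc:mysql://prod/cms");

        public void publish(String articleBody) {
            db.save(articleBody); // untestable without a production-like DB
        }
    }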

Method (2) is more attractive because we'd be following the test-first principle, producing code that's almost 100% covered right off the bat. Even if the initial effort yields only islands of covered code, it would give us a significant beachhead on the way to project-wide coverage and help decouple and isolate the various components.
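As a sketch of the rhythm we'd be committing to (hypothetical names, JUnit 4 assumed), the test is written first, fails, and then drives the minimal implementation:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Written before Slugger exists: the test fails to compile (red), then
    // drives the simplest implementation that passes (green), then refactor.
    public class SluggerTest {
        @Test
        public void lowercasesTitleAndReplacesSpaces() {
            assertEquals("season-opener-preview",
                         Slugger.toSlug("Season Opener Preview"));
        }
    }

    class Slugger {
        static String toSlug(String title) {
            return title.trim().toLowerCase().replaceAll("\\s+", "-");
        }
    }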

The downside in both cases is that our team's business productivity could either slow down significantly or evaporate entirely during any testing push. We cannot afford that during the business-driven crunch, though the crunch is followed by a relative lull that we could exploit for this purpose.

In addition to choosing the right approach (whether (1), (2), or some other as-yet-unknown solution), I need help answering the following question: how can my team ensure that our effort isn't wasted in the long run by unmaintained tests and/or a failure to write new ones as business requirements roll on? I'm open to a wide range of suggestions here, whether they involve carrots or sticks.

In any event, thanks for reading about this self-inflicted plight.

asked Feb 26 '10 by Max A.


1 Answer

"The downside in both cases is that our team's business-wise productivity could either slow down significantly or evaporate entirely during any testing push."

This is a common misinterpretation of the facts. Right now you have code you don't like and struggle to maintain: a "hellish dependency graph", and so on.

So, the "crunch" development you've been doing has led to expensive rework: rework so expensive that you don't dare attempt it. That says your crunch development isn't very effective. It appears cheap at the time, but in retrospect you're really throwing development money away, because you've created problematic, expensive software instead of good software.

TDD can change this so that you aren't producing crunch software that's expensive to maintain. It can't fix everything, but it can make it clear that changing your focus from "crunch" can produce better software that's less expensive in the long run.

From your description, some (or all) of your current code base is a liability, not an asset. Now think what TDD (or any discipline) will do to reduce the cost of that liability. The question of "productivity" doesn't apply when you're producing a liability.

The Golden Rule of TDD: If you stop creating code that's a liability, the organization has a positive ROI.

Be careful of asking how to keep up your current pace of productivity. Some of that "productivity" is producing cost with no value.

"Almost none of our components can be tested in isolation; we don't know all the use cases"

Correct. Retrofitting unit tests onto an existing code base is really hard.
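That said, you can often break one dependency at a time. A common move (a sketch with invented names, not your code) is to extract an interface and inject the collaborator, creating a seam a test can exploit:

    // Invented names, illustrative only: extract an interface for the
    // collaborator and inject it, so a test can substitute a fake.
    interface ScoreFeed {
        int latestScore(String gameId);
    }

    public class ScoreBoard {
        private final ScoreFeed feed;

        public ScoreBoard(ScoreFeed feed) { this.feed = feed; }

        public String render(String gameId) {
            return "Score: " + feed.latestScore(gameId);
        }
    }

    // In a test, a hand-rolled fake stands in for the live feed:
    class FakeFeed implements ScoreFeed {
        public int latestScore(String gameId) { return 42; }
    }

With this seam in place, new ScoreBoard(new FakeFeed()).render("g1") returns a deterministic result with no network or database involved.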

"There's a risk of leading the team into a false sense of security whereby subtle bugs will creep in without anyone noticing"

False. There's no "false sense of security". Everyone knows the testing is rocky at best.

Further, now you have horrifying bugs. You have problems so bad you don't even know what they are, because you have no test coverage.

Trading up to a few subtle bugs is still a huge improvement over code you cannot test. I'll take subtle bugs over unknown bugs any day.

"Method (2) is more attractive"

Yes. But.

Your previous testing efforts were subverted by a culture that rewards crunch programming.

Has anything changed? I doubt it. Your culture still rewards crunch programming. Your testing initiative may still get subverted.

You should look at a middle ground. You can't be expected to go from zero to "fanatically adhering to TDD principles" overnight. That takes time and a significant cultural change.

What you need to do is break your application into pieces.

Consider, for example, the Model - Services - View tiers.

You have a core application model (persistent entities, core classes, etc.) that requires extensive, rigorous, trustworthy testing.
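This tier is where rigorous tests pay off most, because pure domain logic needs no database or view to verify. A hypothetical example (invented class, JUnit 4 assumed):

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Invented example of a core-model test: pure logic, no infrastructure.
    public class PublicationWindowTest {
        @Test
        public void articleIsLiveOnlyInsideItsWindow() {
            PublicationWindow w = new PublicationWindow(100L, 200L);
            assertFalse(w.isLiveAt(99L));
            assertTrue(w.isLiveAt(100L));
            assertTrue(w.isLiveAt(200L));
            assertFalse(w.isLiveAt(201L));
        }
    }

    class PublicationWindow {
        private final long start, end; // inclusive timestamps

        PublicationWindow(long start, long end) {
            this.start = start;
            this.end = end;
        }

        boolean isLiveAt(long t) {
            return t >= start && t <= end;
        }
    }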

You have application services that require some testing, but are subject to "the use cases will likely change during the testing push due to business requirements or as a reaction to unforeseen issues". Test as much as you can, but don't run afoul of the imperative to ship stuff on time for the next season.
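One way to keep service tests from being invalidated by that churn (again, a sketch with invented names) is to exercise the service against a stubbed repository, so schema and infrastructure changes don't break the test even as the business rule evolves:

    import static org.junit.Assert.assertEquals;

    import java.util.Collections;
    import java.util.List;
    import org.junit.Test;

    // Invented names: the service runs against an in-memory stub, so the
    // test depends only on the business rule, not on the database.
    public class HeadlineServiceTest {
        interface ArticleRepository {
            List<String> titlesFor(String team);
        }

        static class HeadlineService {
            private final ArticleRepository repo;
            HeadlineService(ArticleRepository repo) { this.repo = repo; }

            String topHeadline(String team) {
                List<String> titles = repo.titlesFor(team);
                return titles.isEmpty() ? "(no news)" : titles.get(0);
            }
        }

        @Test
        public void fallsBackWhenTeamHasNoCoverage() {
            ArticleRepository empty = team -> Collections.emptyList();
            assertEquals("(no news)",
                         new HeadlineService(empty).topHeadline("AnyTeam"));
        }
    }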

You have view/presentation stuff that needs some testing, but isn't core processing. It's just presentation. It will change constantly as people want different options, views, reports, analysis, RIA, GUI, glitz, and sizzle.

answered Sep 30 '22 by S.Lott