Test Driven Development and the Dangers of Hidden Technical Debt
Is Test Driven Development (TDD) a silver bullet for technical debt? In this article we show that TDD does not prevent technical debt. In fact, the TDD technique itself can create layers of hidden technical debt which, if not caught in time, can have disastrous consequences.
What is Test Driven Development?
Test Driven Development (TDD) is a development practice intended to encourage discipline, structure, and quality code. It works in three stages, commonly known as the red-green-refactor cycle:
- Red—write a test which checks if the required functionality works. Initially the test fails because the functionality does not exist.
- Green—write a basic implementation of the functionality, just enough to make the test pass.
- Refactor—rework your implementation to make it better, cleaner and more efficient.
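The three stages above can be sketched in a few lines of Python. This is a hypothetical example (a small `slugify` helper, driven by a plain-assert test), not a prescribed workflow:

```python
# Red: the test below is written first. At that point `slugify` does not
# exist, so running the test fails with a NameError -- the "red" state.
#
# def test_slugify():
#     assert slugify("Hello World") == "hello-world"

# Green: the simplest implementation that makes the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

def test_slugify():
    assert slugify("Hello World") == "hello-world"

test_slugify()  # passes: the cycle is now "green"

# Refactor: improve the implementation (here, collapse runs of
# whitespace and trim the ends) while keeping the same test green.
import re

def slugify(text):
    return re.sub(r"\s+", "-", text.strip().lower())

test_slugify()  # still passes after refactoring
```

The key property is that the test is unchanged between the green and refactor stages; it acts as the safety harness while the implementation evolves.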
Test Driven Development: A Silver Bullet for Technical Debt?
Many in the software industry believe that TDD is a solution for technical debt.
The argument goes as follows:
- Organizations strictly practicing TDD have unit testing for all their code
- This ensures all functionality is well thought out
- It also verifies that there are no loose ends or obvious failures of core functionality
- Any lapses in the code, which represent technical debt, will cause tests to fail and break the build
- This guarantees no technical debt
Does this argument hold water? As many in the industry know, it doesn’t.
We’ll show that in every stage of the TDD cycle, hidden technical debt can accumulate.
This isn’t because there’s something wrong with TDD. It can happen in any development method. The problem with TDD is that it creates a false sense of security, leading teams to think they have less technical debt than they actually do.
How Can Hidden Technical Debt Accumulate with TDD?
Let’s take a closer look at each of the three stages of TDD.
Red Stage—Hidden Technical Debt
What happens in this stage: Developers write tests, anticipating the functionality they will build in the next stage.
What can go wrong: In the “red” stage, developers don’t know what the final implementation will look like. They are not aware of new requirements that might be introduced in peer review, managerial review or via customer comments. They also cannot anticipate all the possible interactions of the current module with other modules, especially not with future modules that don’t yet exist.
How it leads to technical debt: Because so much is not known upfront, developers cannot possibly build tests that will cover all possible scenarios. It’s true that TDD creates a “safety harness” that can catch major errors. However, many quality issues may exist in the code that are specific to the implementation, and which were not taken into account in the test stage.
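To make this concrete, here is a hypothetical illustration (the `parse_price` function and its test are invented for this sketch). The test is written up front, before any implementation choices are known, so it cannot guard against problems those choices introduce:

```python
# Red stage: this test is written before the implementation exists.
# The author knows the requirement ("parse a dollar amount") but not
# how it will be implemented.
def test_parse_price():
    assert parse_price("$10.50") == 10.50

# Green stage, written later: the implementation uses float arithmetic,
# a choice the test author could not anticipate. The test passes, yet
# summing parsed prices can silently lose precision -- for example,
# parse_price("$0.10") + parse_price("$0.20") is not exactly 0.30.
def parse_price(text):
    return float(text.lstrip("$"))

test_parse_price()  # passes, despite the implementation-specific issue
```

The safety harness catches the gross failure (wrong number returned) but says nothing about the precision issue, because that issue belongs to an implementation decision made after the test was frozen.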
Green Stage—Hidden Technical Debt
What happens in this stage: Developers write code that satisfies customer requirements (at least at a minimum level) and makes the tests pass.
What can go wrong: A very strong assumption made in the TDD methodology is that tests are good and comprehensive. Not all tests actually test the right things, or do so in an intelligent way. Some developers do not have great testing skills (while they may be excellent developers), or they may be great at testing but do not have enough time to design a perfect test.
How it leads to technical debt: If the unit tests do not perfectly model functionality, there may be important things missing in the implementation—and the test will still pass.
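A hypothetical example of such a weak test (both the `is_valid_email` function and its test are invented for illustration). The test runs and passes, but it does not model the real requirement:

```python
# Green-stage implementation: just enough to make the test below pass.
def is_valid_email(address):
    return "@" in address

# A weak test: it checks one valid and one invalid input, so the
# naive implementation above satisfies it.
def test_is_valid_email():
    assert is_valid_email("user@example.com")
    assert not is_valid_email("not-an-email")

test_is_valid_email()  # passes -- yet "@@", "a@", and "@b" are all
                       # misclassified as valid email addresses
```

The build is green, so the gap between what the test checks and what the feature actually requires becomes invisible: hidden technical debt.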
Refactor Stage—Hidden Technical Debt
What happens in this stage: Developers improve their implementation, ensuring that tests still pass, but code is more elegant and efficient at solving the problem at hand.
What can go wrong: Martin Fowler put it succinctly in an entry on his Bliki in 2005:
The most common way that I hear to screw up TDD is neglecting the third step. Refactoring the code to keep it clean is a key part of the process, otherwise you just end up with a messy aggregation of code fragments. (At least these will have tests, so it’s a less painful result than most failures of design.)
How it leads to technical debt: Even the most rigorous software development teams fall short of 100% discipline. Some developers, at least some of the time, will skip the third stage of TDD or perform it only partially. Code written quickly to solve a problem, without sufficient refactoring, is typically difficult to read, maintain and extend.
It is broadly agreed that software complexity, the primary result of code that is written quickly without refactoring, represents technical debt.
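A hypothetical before-and-after sketch of what skipping the refactor stage leaves behind (the `shipping_cost` example and its pricing rules are invented). Both versions pass the same tests; only one is easy to change later:

```python
# Left at the green stage: the test passes, but the pricing rule is
# duplicated across branches, so every future change must be made in
# several places -- a classic source of complexity-driven debt.
def shipping_cost(weight, express):
    if express:
        if weight <= 1:
            return 10 + 5
        else:
            return 10 + 5 + (weight - 1) * 2
    else:
        if weight <= 1:
            return 5
        else:
            return 5 + (weight - 1) * 2

# The refactored version states the same rule once: a base fee, a
# per-kilogram surcharge above 1 kg, and an express flat fee.
def shipping_cost_refactored(weight, express):
    base = 5 + max(0, weight - 1) * 2
    return base + (10 if express else 0)

# The existing tests stay green across the refactoring.
assert shipping_cost(3, True) == shipping_cost_refactored(3, True)
assert shipping_cost(1, False) == shipping_cost_refactored(1, False)
```

Because both versions are behaviorally identical, nothing in the test suite pushes a developer toward the second one; only the discipline of the refactor stage does.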
A Healthier Form of TDD: Focus on What Matters Most
In this article we showed how technical debt can creep into each of the three stages of the TDD process. Technical debt can be avoided with effort, but few developers can keep all of their code clean, all of the time.
What is needed is a prioritization system to help teams understand which parts of the product are at highest risk of quality issues, and focus their efforts there. TDD doesn’t have to be one-size-fits-all. You can invest a special effort covering certain parts of the product with tests, and be a bit more lenient with other parts. This can save a lot of time and make sure that tests really are spot on where they are needed most.
A new category of solutions called Test Quality Intelligence is helping development teams do just that. SeaLights is a leading quality intelligence platform that analyzes and visualizes which parts of your product are at quality risk. It looks at recent code changes, existing tests covering each feature, and actual production usage of the feature, and identifies “test gaps” where a software component is at risk of failure but does not have sufficient tests.
Using a platform like SeaLights, you can achieve a saner form of TDD—systematic and thorough, but focused on the product areas most likely to affect your customers. Request a live demo of SeaLights to see for yourself.