Defect Density: Context is King.

The SeaLights test metrics guide for better and faster CI/CD

Gathering metrics is one of the most fraught parts of software development. Managers need a succinct picture of how a team is performing, but they should think carefully about how the metrics behind that picture are collected and analyzed. Defect density, for example, is simply the number of defects relative to the size of the code base, usually expressed per thousand lines of code. On its own it is not that useful, but like all metrics, it becomes far more valuable when combined with other testing metrics.
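As a quick illustration, here is a minimal sketch of the calculation in Python; the function name and numbers are invented for the example, not taken from any particular tool.

```python
# Minimal sketch: defect density expressed as defects per thousand lines of code (KLOC).
def defect_density(defect_count: int, lines_of_code: int) -> float:
    """Return defects per 1,000 lines of code."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defect_count / (lines_of_code / 1000)

# Example: 45 open defects in a 60,000-line code base -> 0.75 defects per KLOC
print(defect_density(45, 60_000))
```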

More Code is Bad Code

The most common standard for a “good” defect density is one defect per 1,000 lines of code (KLOC). However, this is a totally arbitrary standard: what if a team spends time refactoring and optimizing, substantially reducing the size of the code base? The team might still have the same number of defects overall, but its defect density will go way up.


On the other hand, what if a team writes a lot of sloppy code, generating thousands of new lines but also introducing a bevy of new defects? The defect density might stay constant or even go down, even though this is exactly the kind of sloppy work that test metrics are meant to discourage.

These two cases highlight the main issue with defect density as a metric: simply writing more code, without addressing existing defects, still reduces defect density. Focusing too much on defect density therefore encourages teams to write more code instead of writing smarter code.
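To make the arithmetic concrete, here is a short sketch of both failure modes; all the numbers are made up for illustration.

```python
# Sketch of the two scenarios above, using invented numbers.
def defect_density(defects: int, loc: int) -> float:
    return defects / (loc / 1000)  # defects per KLOC

# Scenario 1: refactoring shrinks the code base, defect count unchanged
print(defect_density(50, 100_000))  # 0.50 before refactoring
print(defect_density(50, 60_000))   # 0.83 after -- density looks worse, same 50 defects

# Scenario 2: sloppy new code adds bulk and new defects
print(defect_density(50, 100_000))  # 0.50 before the sloppy sprint
print(defect_density(65, 140_000))  # 0.46 after -- density looks better despite 15 new defects
```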

The chart below shows an example:

In the top chart, you can see that even though defect density is trending down, the code base is increasing at roughly the same rate as the number of defects. Reducing the code base (in November 2016) resulted in a spike in defect density.

Add Context to Your Metrics

These two cases make it clear that defect density by itself is a limited tool for evaluating code quality, but there are easy ways to make it more valuable. For example, chart the size of the code base and the number of defects separately, alongside the defect density ratio, as in the chart above. That way you keep the context behind the defect density without losing the clarity of its trend. This addresses both problems described earlier: you can see how the code base itself is trending, and whether that trend is having a positive or negative effect on defect density.
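A minimal plotting sketch is shown below; it assumes you already track monthly code-base size and defect counts, and the data here is invented purely for illustration.

```python
# Sketch: chart code base size and defect count alongside the defect density ratio.
import matplotlib.pyplot as plt

months  = ["Aug", "Sep", "Oct", "Nov", "Dec"]
kloc    = [80, 95, 110, 70, 75]    # code base size in KLOC (invented)
defects = [60, 68, 75, 72, 70]     # open defect counts (invented)
density = [d / k for d, k in zip(defects, kloc)]

fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
top.plot(months, kloc, label="Code base (KLOC)")
top.plot(months, defects, label="Defects")
top.legend()
bottom.plot(months, density, color="red", label="Defects per KLOC")
bottom.legend()
plt.show()
```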


You should also measure test coverage alongside defect density, to ensure that the team is not missing defects. Test coverage measures how much of the code base is being tested sufficiently. Measuring it is a process: it requires consistent collaboration between testers and developers to ensure that all scenarios are documented and tested. But it is vital for building confidence in defect density. A low defect density can indicate that the team is working well, but it can also signal that test coverage is too low, which should trigger a thorough test review. As a practice, constantly pairing a tester with a developer can feel time-consuming and tedious, but if reducing defects is your goal, it is much more effective than measuring defect density on its own.
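As a rough illustration of reading the two numbers together, here is a sketch that only trusts a low defect density when coverage clears a threshold; the 70% threshold and the function name are assumptions for the example, not an industry standard.

```python
# Sketch: interpret defect density only in the context of test coverage.
def assess(defects: int, kloc: float, coverage_pct: float,
           min_coverage: float = 70.0) -> str:
    density = defects / kloc
    if coverage_pct < min_coverage:
        return (f"Density {density:.2f}/KLOC is not trustworthy: "
                f"coverage is only {coverage_pct:.0f}% -- review the test suite")
    return f"Density {density:.2f}/KLOC with {coverage_pct:.0f}% coverage"

print(assess(defects=12, kloc=60, coverage_pct=45))  # flagged for a test review
print(assess(defects=12, kloc=60, coverage_pct=85))  # density can be taken at face value
```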

Perhaps the most important consideration with defect density is to be extremely wary when defect density is zero. This almost always means that the defects are there, but the team just isn’t finding them.

My attitude towards using defect density as a metric is the same as my attitude towards all metrics: they usually do not say much on their own. There is no magic number that will tell you exactly how well a software development team is performing. But you can get close when you spend a little more time looking at the whole picture: test coverage, defect density, and code coverage.

Beyond Defect Density and Traditional Code Coverage Metrics

Defect density and many other metrics for measuring the extent of testing are limited and require complex analysis to derive real insights. What would be truly useful is a holistic measurement of test coverage, one that goes beyond unit tests to include integration tests, acceptance tests, and manual tests as well. Traditionally, there has been no easy way to see a unified test coverage metric across all types of tests and all test systems in one place.
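To show what a unified coverage number could mean in principle, here is a toy sketch that merges line coverage reported by different test types into one figure; the report format is invented for the example and is not how SeaLights or any particular tool exposes this data.

```python
# Toy sketch: merge line coverage from several test types into one holistic number.
from typing import Dict, Set

def merged_coverage(reports: Dict[str, Set[int]], total_lines: int) -> float:
    """reports maps test type -> set of covered line numbers for one file."""
    covered = set().union(*reports.values()) if reports else set()
    return 100.0 * len(covered) / total_lines

reports = {
    "unit":        {1, 2, 3, 5, 8},
    "integration": {3, 4, 5, 9},
    "manual":      {10, 11},
}
print(f"{merged_coverage(reports, total_lines=20):.0f}% of lines covered by at least one test type")
```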

SeaLights is a continuous testing platform that makes this possible. It integrates with all your testing tools and lets you measure holistic test coverage across all types of tests. This answers a crucial question: how extensively is your product tested, and how much risk do the untested code changes shipping with your next release carry? Check out our free trial.
