Test Metrics

Agile Testing Metrics

Testing has come a long way since the old days of waterfall software development, requirements documents, and outsourced QA departments. And so have testing metrics. Many of the old metrics QA teams used to live by, such as the number of test cases, are, according to many experts, no longer relevant. So which metrics are relevant and helpful for improving testing in an Agile development environment?

In this article we will describe the transition from waterfall to Agile testing metrics, describe important Agile development metrics and how they relate to testing, and describe new metrics that are used by Agile teams to quantify their tests and measure software quality.

Testing Metrics Before Agile

In the old waterfall model for software development, QA was a separate activity from development, performed by a separate team. The waterfall model takes a non-iterative approach to development where each stage needs to be completed before the next stage begins. In this model, an independent quality assurance team defines test cases which determine whether the software meets the initial requirements.


In the waterfall model, the QA dashboard focused on the software application as a whole, and measured four key dimensions:

  • Product quality: The number of defects and the rate of defects were measured to gauge software quality.
  • Test effectiveness: Code coverage was checked for insight into test effectiveness. QA also focused on requirements-based testing and functional tests.
  • Test status: The number of tests run, passed, or blocked indicated the status of testing.
  • Test resources: The time taken to test the software and the cost of that testing.

The modern Agile development environment relies on the collaborative effort of cross-functional teams. Thus, the metrics that took on such importance in the old independent waterfall model are less relevant today—testing is now an integrated part of the entire development process.

With QA teams becoming part of a cross-functional Agile effort, new metrics emerge that reflect this integrated environment. New goals and expectations lead to new metrics that serve the whole team from a unified perspective.

There are two types of Agile testing metrics:

  1. General Agile metrics that are also relevant for software tests.
  2. Specific test metrics applicable to an Agile development environment.

General Agile Metrics as Applied to Testing

Sprint Burndown

Agile teams use sprint burndown charts to show the rate at which they complete tasks and how much work remains during a defined sprint period.

The typical burndown chart plots remaining hours of effort on the y-axis against sprint dates on the x-axis, with an ideal line descending steadily to zero. The Agile team then plots the actual remaining hours for the sprint against this ideal line.

In the example diagram, the team fails to complete the sprint on time, leaving 100 hours of work unfinished.


Relevance to testing:

  • Testing usually forms part of the definition of done exit-criteria used by Agile teams.
  • The definition of done might include a condition such as “tested with 100 percent unit test code coverage”.
  • Because every “story” completed by an Agile team must also be tested, stories completed reflect progress in testing the key features required by the customer.
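The burndown arithmetic behind such a chart is simple to sketch. The code below is a minimal illustration with hypothetical numbers (chosen to match the 100-hour shortfall mentioned above); the function names are ours, not part of any standard tool.

```python
def ideal_burndown(total_hours, num_days):
    """Ideal remaining hours at the start of each day, descending to zero."""
    step = total_hours / num_days
    return [round(total_hours - step * day, 1) for day in range(num_days + 1)]

def remaining_hours(total_hours, hours_burned_per_day):
    """Actual remaining hours after each day of the sprint."""
    remaining = [total_hours]
    for burned in hours_burned_per_day:
        remaining.append(remaining[-1] - burned)
    return remaining

ideal = ideal_burndown(200, 10)  # 200-hour sprint over 10 working days
actual = remaining_hours(200, [10, 15, 5, 10, 10, 10, 15, 10, 10, 5])
shortfall = actual[-1]           # hours still open when the sprint ends
print(ideal[0], ideal[-1], shortfall)  # 200.0 0.0 100
```

Plotting `ideal` and `actual` against the sprint dates reproduces the burndown chart; any positive `shortfall` at the end means the sprint was not completed on time.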

Number of Working Tested Features / Running Tested Features

The Running Tested Features (RTF) metric tells you how many software features are fully developed and passing all acceptance tests, thus becoming implemented in the integrated product.

The RTF metric for the project on the left shows more fully developed features as the sprint progresses, making for a healthy RTF growth. The project on the right appears to have issues, which may arise from factors including defects, failed tests, and changing requirements.

Relevance to testing:

  • Since RTF metrics measure features that have undergone comprehensive tests, all features included in the metric have passed all of their tests.
  • More features shipped to the customer means more parts of the software have been tested.
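Counting RTF reduces to filtering features on two conditions. The following sketch uses a hypothetical record layout (the `developed` and `acceptance_tests_passed` fields are illustrative, not from any specific tracker):

```python
# Hypothetical feature records from a tracker export.
features = [
    {"name": "login",    "developed": True,  "acceptance_tests_passed": True},
    {"name": "search",   "developed": True,  "acceptance_tests_passed": False},
    {"name": "checkout", "developed": False, "acceptance_tests_passed": False},
]

def running_tested_features(features):
    """A feature counts toward RTF only if it is fully developed
    AND passing all of its acceptance tests."""
    return sum(1 for f in features
               if f["developed"] and f["acceptance_tests_passed"])

print(running_tested_features(features))  # 1
```

Recomputing this count at the end of every sprint yields the RTF growth curve described above.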


Velocity

Velocity takes a mathematical approach to measuring how much work a team completes on average during each sprint, comparing actual completed tasks with the team’s estimated effort.

Agile managers use the velocity metric to predict how quickly a team can work towards a certain goal by comparing the average story points or hours committed to and completed in previous sprints.
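A minimal velocity forecast can be sketched as follows; the story-point history is hypothetical and the helper names are ours:

```python
import math

def velocity(completed_points_per_sprint):
    """Average story points completed per sprint."""
    return sum(completed_points_per_sprint) / len(completed_points_per_sprint)

def sprints_to_finish(backlog_points, completed_points_per_sprint):
    """Rough forecast: sprints needed to burn down the remaining backlog."""
    return math.ceil(backlog_points / velocity(completed_points_per_sprint))

history = [21, 25, 23, 19]   # hypothetical completed points per sprint
print(velocity(history))                 # 22.0
print(sprints_to_finish(110, history))   # 5
```

The forecast is only as good as the assumption that past sprints predict future ones, which is exactly where the technical-debt caveat below applies.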



Relevance to testing:

  • The higher a team’s velocity, the faster that team produces software features. Thus, higher velocity can mean faster progress with software testing.
  • The caveat with velocity is that technical debt can skew the velocity metric. Teams might leave gaps in the software, including gaps in their test automation, and might choose easier, faster solutions to problems that might be partial or incorrect.

Cumulative Flow

The Cumulative Flow Diagram (CFD) shows summary information for a project, including work-in-progress, completed tasks, testing, velocity, and the current backlog.

The following diagram allows you to visualize bottlenecks in the Agile process: Colored bands that are disproportionately fat represent stages of the workflow for which there is too much work in progress. Bands that are thin represent stages in the process that are “starved” because previous stages are taking too long.


Relevance for testing:

  • Testing is part of the Agile workflow, and it is included in most Cumulative Flow Diagrams.
  • By using a CFD, you can measure the progress of software testing.
  • CFDs may be used to analyze whether testing is a bottleneck or whether other factors in the CFD are bottlenecks, which might affect testing.
  • A band in your CFD that keeps widening over time indicates the presence of a bottleneck.
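Bottleneck detection in a CFD can be automated from the per-stage counts. The sketch below uses hypothetical stage names and counts; it flags any stage whose work-in-progress band grows over the observed window:

```python
# Hypothetical cumulative-flow data: cfd[day][stage] = items in that stage.
cfd = [
    {"todo": 20, "in_progress": 3, "testing": 2,  "done": 0},
    {"todo": 15, "in_progress": 4, "testing": 6,  "done": 0},
    {"todo": 10, "in_progress": 4, "testing": 10, "done": 1},
    {"todo": 6,  "in_progress": 4, "testing": 13, "done": 2},
]

def widening_stages(cfd):
    """Stages whose work-in-progress grows from the first to the last day:
    a band that keeps widening is a likely bottleneck."""
    first, last = cfd[0], cfd[-1]
    return [stage for stage in first
            if stage != "done" and last[stage] > first[stage]]

print(widening_stages(cfd))  # ['in_progress', 'testing']
```

Here the `testing` band grows from 2 to 13 items, so testing is the likely bottleneck to investigate first.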

Earned Value Analysis

Earned Value Management encompasses a series of measurements to compare a planned baseline value before the project begins with actual technical progress and hours spent on the project.

The comparison is typically expressed as a dollar value, and using EVM in an Agile framework requires particular preparation, incorporating story points to measure earned value and planned value. You can apply EVM methods at many levels, from a single task to the total project.


Relevance to testing:

  • You can use EVM techniques in Agile to determine the economic value of your software testing.
  • At the single task level, EVM methods help you understand whether your software tests are cost-effective by comparing planned value with earned value.
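The core EVM comparison is simple arithmetic. In this sketch the budget, points, and sprint counts are hypothetical, and story points stand in for scope as described above:

```python
def earned_value(budget, completed_points, total_points):
    """Earned value: budget share of the story points actually completed."""
    return budget * completed_points / total_points

def planned_value(budget, elapsed_sprints, total_sprints):
    """Planned value: budget share expected by this point in the schedule."""
    return budget * elapsed_sprints / total_sprints

budget = 100_000  # hypothetical project budget, in dollars
ev = earned_value(budget, completed_points=30, total_points=100)
pv = planned_value(budget, elapsed_sprints=4, total_sprints=10)
schedule_variance = ev - pv  # negative means behind schedule
print(ev, pv, schedule_variance)  # 30000.0 40000.0 -10000.0
```

A negative schedule variance, as here, signals that the work delivered (including its testing) lags the plan.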

Specific Testing Metrics Useful in an Agile Environment

Percentage of Automated Test Coverage

This metric measures the percentage of test coverage achieved by automated testing. As time progresses and more tests get automated, you should expect higher test coverage and, as a result, increased software quality.


Usefulness for Agile teams:

  • Test automation is one of the few practical ways to sustain high test coverage in Agile teams, because the number of test cases grows with the functionality added in each sprint.
  • Automated test coverage provides a basic measure of risk to Agile teams. The more test coverage achieved by automation, the lower the chances of production defects in a release.
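The metric itself is a single ratio; tracking it per sprint shows the automation trend. A minimal sketch with hypothetical case counts:

```python
def automated_coverage_pct(automated_cases, total_cases):
    """Share of the test suite that runs without manual effort."""
    return round(100 * automated_cases / total_cases, 1)

# Hypothetical counts for three consecutive sprints.
per_sprint = [(120, 200), (150, 220), (180, 240)]
trend = [automated_coverage_pct(a, t) for a, t in per_sprint]
print(trend)  # [60.0, 68.2, 75.0]
```

A rising trend is the expected healthy pattern; a flat or falling one suggests new test cases are outpacing automation work.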

Code Complexity & Static Code Analysis

The code complexity metric, often derived from a measure called cyclomatic complexity, counts the number of linearly independent paths through a program’s source code.

Static code analysis uses a set of tools to examine code without executing it. For example, a compiler can analyze code to find lexical errors, syntax mistakes, and sometimes semantic errors. Static analysis is therefore a good way to check a program’s adherence to sound coding standards.

Usefulness for Agile teams:

  • Developers strive to build simple, readable code to reduce defect counts. Therefore, the cyclomatic complexity metric is useful for Agile teams to determine the risk of unstable or error-prone code. Determining complexity lets you evaluate how likely something is to go wrong with the feature or release.
  • Static code analysis tools help Agile teams check the structure of the code used to build a program, ensuring it adheres to established industry standards, such as indentation, inline comments, and correct use of spacing.
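As a rough illustration of how a complexity tool works, the sketch below approximates cyclomatic complexity by counting decision points in a Python syntax tree. The set of node types counted is a simplification of ours; production tools (e.g. dedicated linters) use more refined rules:

```python
import ast

def cyclomatic_complexity(source):
    """Rough cyclomatic complexity: 1 plus one per decision point
    (if/elif, loops, exception handlers, boolean/conditional expressions)."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, decision_nodes)
                   for node in ast.walk(tree))

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # 3
```

The `elif` appears as a nested `If` node in the tree, so this function has two decision points and a complexity of 3; teams typically flag functions above a chosen threshold (10 is a common rule of thumb) for refactoring.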

Defects Found in Production / Escaped Defects

Escaped defects is a simple metric that counts the defects for a given release that were found after the release date. Such defects have been found by the customer as opposed to the Agile development team. Since escaped defects tend to be quite costly, it’s helpful to analyse them carefully, and strive to see this metric decrease.

Usefulness for Agile teams:

  • Analysing escaped defects helps to ensure continuous improvement in testing and development processes. Defining the root cause of escaped defects helps prevent recurrence of the same issues in subsequent releases.
  • Agile teams can capture the escaped defects metric per unit of time, per sprint, or release, providing specific insights into what went wrong with development or testing in a specific part of the project.
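Operationally, an escaped defect is just a defect whose discovery date falls after the release date. A minimal sketch with hypothetical defect records:

```python
from datetime import date

# Hypothetical defect records for a release shipped on 2023-03-15.
defects = [
    {"id": 101, "found_on": date(2023, 3, 10)},
    {"id": 102, "found_on": date(2023, 3, 20)},
    {"id": 103, "found_on": date(2023, 4, 2)},
]

def escaped_defects(defects, release_date):
    """Defects found after the release date escaped the team's testing."""
    return [d["id"] for d in defects if d["found_on"] > release_date]

print(escaped_defects(defects, date(2023, 3, 15)))  # [102, 103]
```

Grouping the same query by sprint or release gives the per-period view mentioned above.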

Defect Categories

Knowing whether a defect has been found is not enough—it’s important to categorize bugs to get qualitative information about defects. You can break software defects down into a number of categories, including:

  • Functionality errors
  • Communication errors
  • Security bugs
  • Performance defects

You can group defects into categories and make visual representations of the data using Pareto charts.

Usefulness for Agile teams:

  • Using a Pareto chart and the Pareto principle, you can determine the 20 percent of defect categories that cause 80 percent of the problems with your software.
  • By highlighting the categories with most defects, the team comes to a better understanding of what they need to work on improving.
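The Pareto selection can be computed directly from a categorized defect log. The category counts below are hypothetical, and the 80 percent threshold is configurable:

```python
from collections import Counter

# Hypothetical defect log: one category label per recorded defect.
defect_log = (["functionality"] * 45 + ["performance"] * 30
              + ["security"] * 15 + ["communication"] * 10)

def pareto_categories(defect_log, threshold=0.8):
    """Smallest set of categories (largest first) accounting for at least
    `threshold` of all defects -- the 'vital few' of the Pareto principle."""
    counts = Counter(defect_log).most_common()
    total, running, vital = len(defect_log), 0, []
    for category, count in counts:
        vital.append(category)
        running += count
        if running / total >= threshold:
            break
    return vital

print(pareto_categories(defect_log))
# ['functionality', 'performance', 'security']
```

Here three of the four categories account for 90 percent of defects, so improvement effort is best aimed at them first.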

Defect Cycle Time

Defect cycle time measures how much time elapses between starting work on a bug fix and fully resolving that bug. An Agile control chart visually represents cycle time across different Agile tasks: the x-axis shows time, and the y-axis indicates the number of hours it takes to resolve each defect.


Usefulness for Agile teams:

  • Rapid resolution of defects is conducive to quicker release times in a fast-paced Agile team.
  • By measuring defect cycle time against a defined threshold, you gauge exactly how fast Agile teams resolve issues and whether they are showing the expected progress over an increasing number of sprints or iterations.
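Cycle time per defect, and the check against a team-defined threshold, can be sketched as follows (timestamps and the 24-hour threshold are hypothetical):

```python
from datetime import datetime

def cycle_time_hours(started, resolved):
    """Elapsed hours between starting and fully resolving a fix."""
    return (resolved - started).total_seconds() / 3600

# Hypothetical (started, resolved) timestamps for two fixes.
fixes = [
    (datetime(2023, 5, 1, 9), datetime(2023, 5, 1, 17)),
    (datetime(2023, 5, 2, 9), datetime(2023, 5, 3, 13)),
]
times = [cycle_time_hours(s, r) for s, r in fixes]
threshold = 24  # team-defined target, in hours
over_threshold = [t for t in times if t > threshold]
print(times, over_threshold)  # [8.0, 28.0] [28.0]
```

Plotting `times` per defect against the threshold line reproduces the control-chart view described above.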

Defect Spill-over

Defect spill-over measures the number of defects that don’t get resolved in a given sprint or iteration. You can also measure whether defects spilling over from one sprint get resolved in the next sprint.


Usefulness for Agile teams:

  • The main goal for Agile teams is to produce working software at the end of each iteration. Measuring spill-over reduces the chance of teams getting stuck later under a build-up of technical debt.
  • Measuring defect spill-over per sprint helps Agile teams get a clear idea of how efficiently they are dealing with issues.
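Spill-over is the set difference between defects opened in a sprint and defects resolved before it ends. A minimal sketch with hypothetical defect ids:

```python
def defect_spillover(opened_in_sprint, resolved_in_sprint):
    """Defects opened during a sprint but not resolved before it ends."""
    return sorted(set(opened_in_sprint) - set(resolved_in_sprint))

sprint_12_opened = [501, 502, 503, 504]  # hypothetical defect ids
sprint_12_resolved = [501, 503]
print(defect_spillover(sprint_12_opened, sprint_12_resolved))  # [502, 504]
```

Checking whether the spilled ids appear in the next sprint's resolved list answers the follow-up question above of whether spill-over actually gets cleared.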

Note that most test metrics for Agile teams can be measured in a number of ways, such as per:

  • Epic
  • Release
  • Iteration
  • Feature
  • User Story

Towards a Holistic Measurement of Testing in Agile

The Agile methodology, with its iterative team-based approach, calls for new metrics that reflect unique Agile traits. Simply counting test cases and producing bug reports is no longer an indication of quality or value.

Agile development teams are focused on elements such as customer satisfaction, building system health, potential risks to releases, and defects reported by external users. Throughout the discussion, we presented metrics such as sprint burndown, velocity, or defect cycle time, which are designed around these focus areas.

However, Agile teams might find themselves tracking a large number of isolated metrics. While these metrics are essential for understanding test quality, looking at each of them separately gives an incomplete and sometimes misleading picture.
