Software test metrics are one of the most common cornerstones of software quality assurance. Everyone calculates them and talks about how they help improve both test and software quality. But, truth be told, test metrics, as we use them today, measure test activity rather than test and software quality. Take test execution metrics, for example: does a 100% pass rate ensure software quality?
So how do we connect software test metrics to test and software quality? With Continuous Testing. With Continuous Testing, we must implement a new generation of software test metrics that specifically measure the quality of our tests, achieving continuous improvement by identifying software quality risks.
Software Test Metrics 1.0 Don’t Cut It Anymore
Before I dive into test metrics 2.0, it’s important to understand two things. First, in no way are test metrics such as test execution irrelevant. They are a foundation that you must have in place in order to measure and calculate quality and efficiency. Version 1.0 is a fundamental requirement for upgrading to version 2.0! Second, the reason test metrics 1.0 don’t cut it anymore is that software development has evolved. Frankly, 10 years ago it was easier to draw a relationship between activity and quality. Today, there are two main drivers in play that make it extremely difficult to infer software quality from test metrics. Once you pile on Automated Testing and CI/CD, extremely difficult becomes nearly impossible.
Technology advancements over the past years have made applications increasingly useful and easier to build, but the explosion of innovation has also made modern applications complex. New software techniques such as isolated microservices and containers, APIs, and SOA web services enable more code reuse and componentized functionality. These components must all work in tandem, even while changes are being made independently to each component, making testing much more challenging.
The number of environments that applications need to operate in is also growing, exponentially adding complexity. Not only do applications need to run across a variety of form factors such as smartphones, tablets, and laptops, but also on different operating systems. Application developers and testers must also account for diverse browsers with multiple versions of each. And as if that isn’t enough, the explosion of IoT is adding yet another layer of operating environments that developers and testers need to support.
Companies across the globe are quickly realizing that engaging with their customers digitally is the key to growth and even survival. Success requires providing customers with great digital experiences, which means constantly and consistently ensuring that their applications are better than the competition’s. Updates and improvements need to be integrated and released quickly, leading to ever-shortening release cycles. Regardless of the shrinking intervals between releases, code still needs to be tested to ensure quality.
As these two drivers intensify, traditional waterfall-like QA practices become untenable, as there is no longer time to stop and test a “stable state”. At some point, the rising complexity of technology and the shrinking time between releases intersect, and delivering quality software requires a Continuous Testing strategy. One might think of this as the “Ground Zero” of Continuous Testing. Competition and innovation are constants, so we do not foresee any reversal or deceleration in the cadence of releases or the speed of technological innovation.
While there are powerful forces driving development teams towards Continuous Testing, there are important enablers that support its adoption. Machine learning and improved data visualization capabilities enable quality management systems to provide greater visibility and insight into areas of applications that, until recently, were out of reach for QA teams.
Leveraging Continuous Testing Metrics to Drive Efficiencies
Behind Continuous Testing lie many best practices, new processes, and skill-set adaptations (if you need a refresher on what Continuous Testing is and how it differs from test automation, see our previous blog post, The Great Debate: Automated Testing vs Continuous Testing). But at the core, the driver of all this is the need to do things smarter, in a way that reduces testing time without jeopardizing the quality of the application. While there are several ways to test smarter and more effectively, there are a few vital modern test quality metrics that provide important insights into how you can directly affect both the quality of your software and the strategies that advance your QA activities.
The Uncovered Country: What Your Tests Do Not Cover
Which areas in your application (or which microservices) have never been tested (by a Unit Test and/or by a GUI Test and/or by an API Test) in this release?
The Neverending Story: Rapid, Incremental Code Changes
Which areas in your application (or which microservice) had the highest volume of code changes?
Much Ado About Nothing: Untested Code Changes
Which code changes have been tested and how? And, more importantly, which code changes have not been tested?
The common denominator between these test quality metrics is that they focus on the invisible, not the visible. It is what you don’t know that will get you in the end, as those areas represent the highest risk to software quality.
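The third metric, untested code changes, can be made concrete with a small sketch. This is an illustrative example, not any particular tool’s implementation: the changed-line data would in practice come from `git diff` and the covered-line data from a coverage report; both are stubbed here.

```python
# Sketch: find changed lines that no test executed.
# `changed` and `covered` are hypothetical stand-ins for data from
# version control and a coverage report.

def untested_changes(changed, covered):
    """Map each file to its changed-but-uncovered line numbers."""
    return {
        path: sorted(lines - covered.get(path, set()))
        for path, lines in changed.items()
        if lines - covered.get(path, set())
    }

changed = {"cart.py": {10, 11, 12}, "api.py": {40, 41}}
covered = {"cart.py": {10, 11, 12}, "api.py": {40}}
print(untested_changes(changed, covered))  # -> {'api.py': [41]}
```

Files whose every changed line was executed by some test drop out of the result entirely, so what remains is exactly the invisible risk the metric is after.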
There are a million ways that Dev, QA, and DevOps teams can leverage modern test quality metrics, but here are some common uses where they are being applied to directly improve quality, effectiveness, and efficiency.
1. Identify quality risks: By examining what your tests do not cover, you can identify quality risks within your code. This makes it easy to pinpoint areas with no tests, critical areas of risk where there is a high volume of code changes but a low ratio of coverage, and untested code changes. Using this data you can easily identify missing tests and focus your team’s testing efforts. Once missing tests are identified, the responsible party, typically the developer or automation engineer who owns the “problematic” area of the system, can create new tests or adjust existing ones. By creating and executing the missing tests you ensure that the regression test suite for each release satisfies your coverage goals and that all code changes have been addressed.
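The “high volume of code changes but low coverage” signal could be computed along these lines. A minimal sketch with hypothetical data and thresholds; real change counts and coverage ratios would come from your version control history and coverage tool:

```python
# Sketch: flag high-risk areas by combining change volume with coverage.
# Thresholds and input data are illustrative.

def find_quality_risks(files, change_threshold=10, coverage_threshold=0.5):
    """Return files with many recent changes but low test coverage."""
    return [
        name
        for name, stats in files.items()
        if stats["changes"] >= change_threshold
        and stats["coverage"] < coverage_threshold
    ]

files = {
    "checkout.py": {"changes": 25, "coverage": 0.30},  # hot and under-tested
    "search.py":   {"changes": 3,  "coverage": 0.80},  # stable
    "billing.py":  {"changes": 18, "coverage": 0.90},  # hot but well covered
}
print(find_quality_risks(files))  # -> ['checkout.py']
```

Note that `billing.py` changes just as often but is well covered, so it is not flagged; the risk comes from the combination, not from either signal alone.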
2. Eliminate inefficient tests: You can also use coverage metrics to tune up your automation suite for better performance by reducing time wasted building, maintaining, or running low-value tests. By examining each test in the system, its coverage contribution can be calculated, along with whether it exercises a risky or important area. If it doesn’t, the test may be a prime candidate for removal.
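One way to quantify a test’s coverage contribution is to ask what it covers that no other test does. A sketch under that assumption, with made-up test names and line sets:

```python
# Sketch: compute each test's unique coverage contribution.
# A test whose covered lines are all covered by other tests contributes
# nothing unique and may be a removal candidate.

def unique_contribution(test_coverage):
    """Map each test name to the set of lines only it covers."""
    result = {}
    for name, lines in test_coverage.items():
        others = set().union(
            *(l for n, l in test_coverage.items() if n != name)
        )
        result[name] = lines - others
    return result

coverage = {
    "test_login":    {1, 2, 3},
    "test_checkout": {3, 4, 5},
    "test_smoke":    {1, 3},  # fully redundant with the other two
}
contrib = unique_contribution(coverage)
print(contrib["test_smoke"])  # -> set(): no unique coverage
```

Zero unique contribution alone doesn’t condemn a test (a fast smoke test may earn its keep on speed), but it is a strong prompt to ask why the test exists.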
3. Evaluate manual tests: These metrics can also be leveraged to bridge the migration from traditional manual testing to new automated testing techniques. This bridge is possible because manual tests leave tracks, just like automated ones. For example, a QA team leader can use these test markers to validate whether the team’s manual or exploratory testing is converging on and covering all expected areas of the application. If not, he or she now has the power to direct the engineers back onto the path that guarantees maximum coverage and minimizes wasted time, as well as to identify tests that should be automated.
Measuring Test Quality
To prevent quality drops in Continuous Delivery, it is critical to measure your tests’ quality to ensure effectiveness and efficiency. There are three steps in this process:
- Benchmark: Know where you are today, measure your test effectiveness and compare your status to industry standards.
- Baseline: Mark your baseline and identify quality issues based on downward trending metrics, compared to your quality baseline.
- Improve: Improve and optimize your test quality and efficiency over time.
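The baseline step can be sketched as a simple trend check: flag a metric when recent measurements fall below the baseline you set. The metric, window, and threshold values here are illustrative, not from any specific tool:

```python
# Sketch: detect a quality metric trending below its baseline.
# Requiring several consecutive low readings avoids alerting on
# a single noisy build.

def below_baseline(history, baseline, window=3):
    """True if the last `window` measurements are all below the baseline."""
    recent = history[-window:]
    return len(recent) == window and all(v < baseline for v in recent)

coverage_history = [0.82, 0.81, 0.78, 0.74, 0.71]  # coverage ratio per build
print(below_baseline(coverage_history, baseline=0.80))  # -> True
```

The same check works for any of the metrics above, such as the count of untested code changes per build (inverted, since there a rising trend is the bad sign).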
Test quality metrics within the context of Continuous Testing shift your perception of quality away from activity and towards efficiency and effectiveness. It is all about identifying risk and minimizing inefficiencies in a constantly moving and evolving target. Once you understand your baseline, you have a solid basis for maintaining and improving quality no matter how much your code changes, because you can now identify and manage each and every risk.
In order to get the most out of these three use cases, we at Sealights have created dedicated programs in which we work with our customers to reach each desired outcome. We work with our clients in their own environment and ensure that they master the process and can repeat it themselves to achieve Continuous Testing.
If you would like to see these software quality metrics in action on your own tests and code, and enjoy the benefits shown above, reach out to us at firstname.lastname@example.org.