How to Manage Your Quality Technical Debt with Test Gap Analytics

Amir Schwartz, VP R&D | February 18, 2019

What is Technical Debt?

Technical debt refers to the difference between a system or solution’s current state and its ideal state. In software development, it is the result of trade-offs made when development teams choose the short-term benefit of shipping suboptimal code to increase release velocity over the long-term value of writing high-quality code that can easily be modified, repaired, and maintained.

Technical debt usually arises when compromises are made during development to achieve certain business objectives, gaining ground in one dimension at the expense of excellence in another and leaving behind poorly designed, sub-optimal code.

Previously used to describe the consequences of quick-and-dirty coding practices during software development, the term “technical debt” is now applied to other areas of IT development, including architecture, infrastructure, and integration.

Although technical debt is essentially a liability, savvy software development firms have been leveraging it to speed up the velocity of their development teams and reduce time to completion of clients’ projects.

Types of Technical Debt

There are various kinds of technical debt in software development, distinguished by developers’ level of awareness and reason for incurring said debt. They are separated into the following quadrants:

-In the deliberate and prudent quadrant, dev teams know that they are taking on debt and judiciously weigh the benefits of a faster release against the cost of paying off the debt later on. This kind of debt represents quality shortcuts taken by a team because of an imminent deadline. In this instance, the dev team usually plans to pay off the technical debt at a later date.

-In the deliberate and reckless quadrant, the team consciously decides to use quick-and-dirty coding practices to get the product to the client faster. However, it continuously incurs technical debt without regard to the consequences, even when there are no imminent deadlines. In most cases, there is no plan to pay off the debt at a future date and no preparation for technical debt management.

-In the prudent and accidental quadrant, dev teams have no idea that they are incurring technical debt during product development. They only discover such debt when they are performing due diligence on the solution. Once it’s discovered, the team tries to understand how and why the technical debt came about and resolve to design or code better in the future to prevent the further occurrence of such debt. One way to prevent future technical debt is to enforce and adhere to your definition of done.

-The reckless and accidental quadrant is the least desirable of the four. In this instance, dev teams either do not know or do not care that they have incurred debt, or they cannot correct it once it is discovered.

Is technical debt good for your software development teams?

Technical debt is something dev teams can’t live without: it helps them focus on the more important parts of product development, resulting in increased velocity and faster releases. For instance, a team can decide to put off improving a feature that already works if the available information indicates that doing so would yield no time advantage or added benefit.

Although it increases velocity, lowers the short-term costs of projects, and keeps software implementation or development projects on schedule, technical debt can lead to serious problems if left unaddressed. Dev teams that do not approach technical debt correctly will experience a dramatic decrease in the quality of their products.

Before deciding to accrue technical debt, teams should carefully weigh the pros and cons and then create a plan for technical debt management. In some cases, circumstances may force development teams to address technical debt whether they want to or not. In such a situation, the cost of bringing the software up-to-date will likely be massive.

Where is your technical debt located?

Although dev teams prefer to use unit tests to close technical debt (since they are very cheap to write and run), unit tests alone are insufficient for the task: they cannot perform the end-to-end validation that a high-quality regression safety net requires.

For that, there are other kinds of tests: integration, regression, end-to-end, and so on. However, these tests are expensive to create, run, and maintain. Furthermore, the short, continuously iterative development cycles that characterize Agile methodologies mean that QA teams must write and run many of these expensive tests for each software component, and before every release.
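
To make the difference in scope concrete, here is a minimal pytest-style sketch. The pricing function, endpoint, and payloads are hypothetical and exist only for illustration; a real end-to-end suite would also need environment setup, test data, and teardown, which is exactly what makes these tests expensive.

```python
# Hypothetical pricing function, defined here only to keep the example self-contained.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)


def test_apply_discount_unit():
    # Unit test: validates one function in isolation. Cheap to write and run,
    # but it says nothing about how the pieces behave together.
    assert apply_discount(price=100, percent=10) == 90


def test_checkout_end_to_end():
    # End-to-end test (sketch): drives the full checkout path through an HTTP API.
    # The URL and payload are illustrative; a real test needs a running system.
    import requests

    response = requests.post(
        "http://localhost:8000/checkout",
        json={"item": "sku-1", "coupon": "SAVE10"},
        timeout=5,
    )
    assert response.status_code == 200
    assert response.json()["total"] == 90
```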

This has made testing a bottleneck for fast-paced software development organizations, resulting in reduced velocity, inefficiencies, and wasted engineering resources (due to the writing of redundant tests). The most efficient way for teams to increase velocity while still delivering quality products is to focus their effort on developing and running these expensive tests only on the areas that matter most (where the technical debt resides). However, most managers don’t know how to measure technical debt.

This is where SeaLights’ Quality Intelligence Platform comes in.

SeaLights’ Quality Intelligence Platform: Test Gap Analytics

One of SeaLights’ functionalities — Test Gap Analytics — helps dev teams to maintain low-risk technical debt by enabling QA engineers to write and execute tests for only those areas that need them the most.

These areas are referred to as test gaps: areas of code that were not tested in a certain time period or in a certain build. By their nature, test gaps are pockets of high-risk technical debt; they are usually found where code was recently changed or is executed in production but has not been tested in any regression cycle.

Identifying test gaps helps QA teams focus their test development on the areas most likely to contain bugs while confidently ignoring other areas. In essence, Test Gap Analytics facilitates data-supported, risk-based testing.

SeaLights’ Test Gap Analytics leverages big data analytics to help dev teams improve software quality by identifying what should be tested and areas where new test development should be executed (i.e. the test gaps). Test Gap Analytics works like a funnel in which each layer of information it provides is an additional prism that allows teams to focus attention on the areas of code that matter the most.

The first prism is test coverage, which can be based on a specific test type, on all the different test types, or only on the expensive test types run against an application. SeaLights can reveal code coverage regardless of the test framework being used. However, this prism shows dev teams all of the theoretical debt they have incurred, debt that would be impractical to close in full. To drill down into the data shown in this prism, SeaLights provides additional prisms.
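
As a minimal illustration of this first prism, the sketch below merges per-test-type coverage into a single set of covered methods. The method names and coverage sets are invented stand-ins for data that the platform collects automatically.

```python
# Illustrative per-test-type coverage: which methods each suite exercised.
coverage_by_test_type = {
    "unit":        {"Cart.add", "Cart.total", "Pricing.apply_discount"},
    "integration": {"Cart.total", "Checkout.submit"},
    "end_to_end":  {"Checkout.submit", "Payment.charge"},
}

# Union across all test types: everything covered by at least one test.
covered_methods = set().union(*coverage_by_test_type.values())
print(sorted(covered_methods))
```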

The second prism provides data on code modification. It identifies code that was modified, either during a specific sprint or in any time period of choice, but has not been tested. When the data from the two prisms is overlapped, the areas of code common to both are the test gaps: code that was modified or added recently and was not tested by any test type. These are the areas of code that matter the most and represent a much higher risk to software quality.
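
A minimal sketch of this overlap, with invented method names standing in for real coverage and change data:

```python
# Prism 1: methods covered by any test type (from the previous sketch).
covered_methods = {"Cart.add", "Cart.total", "Checkout.submit", "Payment.charge"}

# Prism 2: methods modified or added in the chosen sprint or time window.
modified_methods = {"Cart.total", "Pricing.apply_discount", "Payment.refund"}

# Test gaps: recently modified code with no coverage from any test type.
test_gaps = modified_methods - covered_methods
print(sorted(test_gaps))  # ['Payment.refund', 'Pricing.apply_discount']
```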

The third prism represents important areas of code. SeaLights’ production agent enables organizations to expose usage data coming from production, letting dev teams know which methods are actually used in production and which are not.

Combining all three prisms enables organizations to determine the files and methods (in vital areas) that were modified but not tested.
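
Extending the previous sketch, here is how the three prisms might be combined conceptually. The sets are again illustrative stand-ins, not SeaLights’ actual data model:

```python
# Prism 1: covered by some test; Prism 2: recently modified; Prism 3: used in production.
covered_methods    = {"Cart.add", "Cart.total", "Checkout.submit", "Payment.charge"}
modified_methods   = {"Cart.total", "Pricing.apply_discount", "Payment.refund"}
used_in_production = {"Cart.add", "Cart.total", "Pricing.apply_discount", "Checkout.submit"}

# Modified but untested code, narrowed to what production traffic actually exercises:
# these are the highest-risk test gaps and the first candidates for new tests.
test_gaps = modified_methods - covered_methods
high_priority_gaps = test_gaps & used_in_production
print(sorted(high_priority_gaps))  # ['Pricing.apply_discount']
```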

QA managers should leverage the Test Gap Analytics report during sprint planning sessions. It is a resource that exposes and points dev teams towards their technical debt, enabling them to ensure the delivery of higher-quality products to clients.

It helps team leaders make data-driven decisions on whether they can live with the amount of technical debt in a specific component, or whether they should close it in the current sprint or later on.

Conclusion

Test Gap Analytics transforms the way dev teams operate from a reactive approach of resolving issues arising from untested code to a proactive method of preventing problems before they occur. Developers and QA engineers can leverage smart data to ensure efficient sprint planning and make accurate decisions on where and when tests should be written and executed.

By doing so, they can expose the areas that harbor high-risk technical debt and focus their test development efforts on reducing that risk and exposure.