(1) Background: What is Regression Testing?
Regression testing is carried out to ensure that a system or an “Application Under Test” (AUT) behaves as expected after enhancements or bug fixes. Regression testing usually refers to testing activities performed during the software maintenance phase. Its key objectives are to retest the changed components and then to verify the parts affected by those changes. Regression testing is performed at different levels: unit, integration, functional, and system.
Regression testing is needed for various reasons such as:
- Incremental code changes in a project or a release
- Major releases or projects going live
- Emergency production fixes
- Configuration and environment changes
(2) Need for Automated Regression Testing Suite
As we have discussed, regression tests are carried out to ensure that changes to an application do not disrupt its currently functioning parts. While there is always an effort to optimize the regression suite, there is also an attempt to provide the coverage required to ensure the application does not break down in production.
Distributed agile teams are typically characterized by an ever-increasing regression test suite.
Given that some tests (e.g., those for core features and critical functionality) need to be executed repeatedly in each regression run, these regression test cases should be automated.
The focus should be on automated regression testing for high-risk areas, using risk-based testing. Automation should be leveraged to the maximum extent. As a rule of thumb, any test that will be executed five or more times in the future should be automated to deliver a positive ROI.
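The “five or more executions” rule of thumb above is a break-even argument, which can be sketched with simple arithmetic. The cost figures below are purely illustrative assumptions, not data from this article.

```python
# Hypothetical break-even check for the "automate if run 5+ times" rule of
# thumb. All cost figures are illustrative assumptions.

def automation_pays_off(manual_minutes, automation_cost_minutes,
                        automated_run_minutes, planned_runs):
    """Return True if automating is cheaper than repeated manual execution."""
    manual_total = manual_minutes * planned_runs
    automated_total = automation_cost_minutes + automated_run_minutes * planned_runs
    return automated_total < manual_total

# A 30-minute manual test that takes 2 hours to automate and then runs in 1 minute:
print(automation_pays_off(30, 120, 1, 4))  # False: four runs do not recover the cost
print(automation_pays_off(30, 120, 1, 5))  # True: pays off from the fifth run
```

With these assumed numbers the break-even point lands at five executions, matching the rule of thumb; in practice the threshold depends on the team’s own maintenance and execution costs.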
There are a number of automated regression testing products, such as Micro Focus UFT (formerly HPE UFT), SmartBear tools, and Tricentis Tosca, to name a few. In an Agile development environment, the newer breed of automation tools includes Cucumber and its Gherkin syntax.
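As a minimal sketch of an automated regression suite, the example below uses Python’s built-in `unittest` (an assumed choice; the article does not prescribe a framework) to group core-feature checks so they can be rerun on every build. The `login` function is a hypothetical stand-in for the application under test.

```python
# Minimal sketch of an automated regression suite using Python's built-in
# unittest module. login() is a hypothetical stand-in for the AUT.
import unittest

def login(user, password):
    # Stand-in for a real application call.
    return user == "alice" and password == "s3cret"

class RegressionSuite(unittest.TestCase):
    """Core-feature checks that run in every regression cycle."""

    def test_login_core_feature(self):
        self.assertTrue(login("alice", "s3cret"))

    def test_login_rejects_bad_password(self):
        self.assertFalse(login("alice", "wrong"))
```

A suite like this would typically be executed on each build via `python -m unittest`, or wired into a CI/CD pipeline so the regression pack runs automatically after every change.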
(3) Software Regression Process
The software regression process steps include –
- Analyzing the changes in software
- Analyzing impact resulting from these changes
- Defining regression testing strategy to minimize the impact
- Building regression test suite
- Executing regression tests at different levels – unit, integration, functional and system
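The first steps above (analyzing changes and their impact, then building the suite) amount to change-impact analysis. The sketch below illustrates one simple form of it; the module names and the test-to-module mapping are hypothetical examples, not part of any real tool.

```python
# Illustrative change-impact analysis: map each test to the modules it
# exercises, then select the tests that touch any changed module.
# The mapping and module names are hypothetical.

TEST_TO_MODULES = {
    "test_checkout": {"cart", "payments"},
    "test_search":   {"catalog"},
    "test_login":    {"auth"},
}

def select_regression_tests(changed_modules):
    """Return tests whose exercised modules intersect the change set."""
    changed = set(changed_modules)
    return sorted(test for test, modules in TEST_TO_MODULES.items()
                  if modules & changed)

print(select_regression_tests(["payments"]))  # ['test_checkout']
```

In a real pipeline the mapping would come from coverage data or a traceability matrix rather than being hand-written, but the selection logic is the same: intersect the change set with what each test exercises.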
(4) How Should One Choose Test Cases for Regression?
Choosing test cases for regression packs is not a trivial exercise. There are three types of test suites executed during each release of a software application – regression tests, release specific tests, and defect fix verification tests. Careful thought and attention must accompany the selection of test sets for the regression pack. Some of the guidelines to select test cases for regression suite include –
(i) Include the test cases that have frequently yielded bugs
(ii) Include the test cases that verify core features of the application – use requirement traceability matrix.
(iii) Include the test cases for functionalities that have undergone recent changes.
(iv) Include all the integration test cases – Even though integration testing is a separate phase of the software testing lifecycle, its test cases should be included in the regression test suite.
(v) Include all complex test cases – Some system functionality may only be accomplished by following a complex sequence of graphic user interface (GUI) events.
(vi) Prioritize the test cases for regression testing based on business impact, to reduce the efforts in regression testing –
- Priority 0: Sanity test cases delivering high project value.
- Priority 1: Essential functionalities for delivering high project value.
- Priority 2: System test cycle cases delivering moderate project value.
(vii) Categorize the selected test cases as Reusable/repetitive test cases, Obsolete test cases.
(viii) Choose the test cases on a case-by-case basis.
(ix) Classify based on the risk exposure.
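The Priority 0/1/2 scheme in guideline (vi) above can be sketched as a simple ordering and capping of the regression pack. The test names and assigned priorities below are illustrative assumptions.

```python
# Hedged sketch of priority-ordered regression execution, following the
# Priority 0/1/2 scheme described above. Names and priorities are illustrative.

REGRESSION_PACK = [
    ("test_report_layout",   2),  # Priority 2: moderate project value
    ("test_user_login",      0),  # Priority 0: sanity, high project value
    ("test_order_placement", 1),  # Priority 1: essential functionality
]

def execution_order(pack, max_priority=2):
    """Sort tests by priority (0 first) and optionally cap the suite."""
    return [name for name, priority in sorted(pack, key=lambda tp: tp[1])
            if priority <= max_priority]

print(execution_order(REGRESSION_PACK))
# ['test_user_login', 'test_order_placement', 'test_report_layout']
print(execution_order(REGRESSION_PACK, max_priority=1))
# ['test_user_login', 'test_order_placement']
```

Capping by priority is how the scheme reduces regression effort: under time pressure a team runs only Priority 0 and 1, accepting the risk on the moderate-value cases.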
(5) Metrics and Dashboard to Track Regression Tests
With the amount of investment that goes into running tests and managing risk in the software process, it is important to track both functional and code coverage in regression.
QA functional coverage is generally measured through the pass/fail status of tests (though that is better described as test execution status). Another method is to measure the percentage of feature sets tested.
Many test automation tools have a basic built-in dashboard for publishing test status reports, but most lack the ability to report true coverage. Real coverage can only be determined through code coverage, which is typically collected only for unit tests. Unit tests are granular and cannot take the place of functional tests, including regression, API, and end-to-end tests. With incremental code changes flowing through CI/CD pipelines, code coverage for functional, API, and regression tests is the need of the hour.
While it is accepted that functional code coverage is a critical indicator of quality, there has been no straightforward way to extract a single, definitive coverage metric across all tests (unit, API, security, etc.) for a build.
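One simple form of the “percentage of feature sets tested” metric mentioned above is the union of coverage across all test types. The sketch below illustrates the idea; the feature names and per-suite coverage data are hypothetical.

```python
# Illustrative feature-set coverage aggregated across test types.
# Feature names and coverage data are hypothetical examples.

FEATURES = {"login", "search", "checkout", "reporting"}

COVERED_BY = {
    "unit":       {"login", "search"},
    "api":        {"search", "checkout"},
    "regression": {"login", "checkout"},
}

def functional_coverage(features, covered_by):
    """Percentage of features exercised by at least one test type."""
    covered = set().union(*covered_by.values())
    return 100.0 * len(covered & features) / len(features)

print(functional_coverage(FEATURES, COVERED_BY))  # 75.0: 'reporting' is untested
```

A consolidated view like this is exactly what a per-suite pass/fail dashboard hides: every suite here may be green while one feature remains entirely untested.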
SeaLights’ test metrics dashboard measures exact test coverage calculated across all test tools and environments.
Identifying regression test cases is critical and requires complete knowledge of the application or product under test. Change impact analysis and the defect history both play a major role in identifying test cases for regression. The right combination of testers and business owners can add value in identifying regression test cases across an application’s or a product’s lifecycle. Regression test activity should be tracked through metrics and a regularly published dashboard. The dashboard should include progress on all tests – unit, API, security, etc. – across each build, to provide assurance of the right level of regression coverage.
Renu Rajani is Vice President in Financial Services at Infosys Limited. She has nearly 28 years of experience in the consulting and IT industry. Renu is the author of two books on software testing, published by Packt Publishing and McGraw-Hill Education, respectively.