Regression testing in agile helps development teams concentrate on new functionality while maintaining stability with every new product increment. Teams use regression testing to make sure that tested software continues to perform after every modification. Regression testing is the “stepchild” of agile testing, loved by few, but it is essential to achieving the high velocity that agile teams strive for.
Regression Testing in an Agile Context: Basic Concepts
In agile, testing needs to evolve with each sprint, and testers must make sure that new changes do not affect the existing functionality of the application. This is known as regression testing.
Regression testing ensures that previous functionality of the application works effectively and new changes have not introduced new bugs. Regression tests should be employed whether there is a small localized change to the software or a larger change. Teams must verify that new code does not conflict with older code, and that code that has not been changed is still working as expected.
In agile, there are frequent build cycles and continuous changes are added to the application. This makes regression testing in agile essential. For successful regression testing in agile, a testing team should build the regression suite from the onset of product development. They should continue building on it alongside development sprints.
Regression Testing Methods
There are three ways to undertake regression testing. The approach you select will vary according to your circumstances, the size of your codebase, the number of testers on your team, and your available resources.
- Re-test everything—involves rerunning all existing tests on the new codebase. If the tests are well designed, this will isolate regressions. However, this method is resource intensive and may not be feasible for a large codebase.
- Selective re-testing—it is sometimes possible to identify a subset of your existing tests that can address all or almost all of the “moving parts” of your codebase. It is then sufficient to re-run that selective set to discover regressions across the codebase.
- Prioritized re-testing—used on large codebases. The priority tests address code paths, user actions and areas of functionality expected to contain bugs. Once these tests have run you can complete the remaining tests.
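The selective and prioritized approaches above can be sketched as a small selection routine. The test metadata here (covered modules, risk scores) is hypothetical; in practice it would come from your own coverage and defect data.

```python
# Sketch of selective and prioritized re-testing. Each test declares
# which modules it covers and carries a risk score (hypothetical data).

TESTS = [
    {"name": "test_login", "covers": {"auth"}, "risk": 9},
    {"name": "test_checkout", "covers": {"cart", "payment"}, "risk": 8},
    {"name": "test_profile_edit", "covers": {"profile"}, "risk": 3},
    {"name": "test_search", "covers": {"search"}, "risk": 5},
]

def select_tests(changed_modules):
    """Selective re-testing: keep only tests touching changed modules."""
    return [t for t in TESTS if t["covers"] & changed_modules]

def prioritize_tests(tests):
    """Prioritized re-testing: run the highest-risk tests first."""
    return sorted(tests, key=lambda t: t["risk"], reverse=True)
```

For example, if a sprint only touched the `auth` and `payment` modules, `select_tests({"auth", "payment"})` shrinks the run to two tests, and `prioritize_tests` orders them so the riskiest runs first.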
Automated Regression Testing: Important Considerations
Regression testing requires constant repetition. Every release cycle needs to include regression testing to ensure new developments have not broken anything.
Not all regressions are the consequence of a new feature or a routine bug fix. They can also be the result of new browser versions, database updates or other environment changes. A regression can also be a performance or security problem. When stable and repeatable regression cases are automated, manual testers can focus on testing across different environments and on complex exploratory cases.
Consider the following when you create a strategy for regression testing automation:
Is automation suitable for your project size?
Automated testing is efficient for large and medium-scale projects, especially when testing software with multiple sub-systems, for example web applications or multiuser games. For a small or short-term project, automation will not have a high return on investment and may not be worthwhile.
Manual tests are a starting point
If you try to write automated regression tests against a feature still in development, you could waste time writing against a volatile feature. Thus, you should only automate a regression test once you have run and passed it manually at least once. Only then can you compare the results of the manual run and the automated test.
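Once a flow has passed manual testing, its verified behavior can be captured as automated regression checks. The `login()` function below is a hypothetical stand-in for the feature under test; the two tests mirror the manually verified happy path and failure path.

```python
def login(username, password):
    # Hypothetical stand-in for the real authentication call.
    valid = {"alice": "s3cret"}
    return valid.get(username) == password

def test_login_succeeds_with_valid_credentials():
    # Mirrors the manually verified happy path.
    assert login("alice", "s3cret")

def test_login_fails_with_wrong_password():
    # Mirrors the manually verified failure path.
    assert not login("alice", "wrong")
```

Because the expected results were confirmed manually first, any future failure of these tests points to a regression rather than to a wrong expectation baked into the test.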
Don’t aim for 100% coverage
Regression testing scripts should cover the 70-90% of manual tests that are effective and repeatable. The remaining 10-30% of test cases isolate a bug only once, or repeatedly report false negatives or positives, so they are inappropriate for regression testing. Aiming for 100% regression test coverage is actually less effective and can waste test resources.
Regression Automation Challenges
Your team should be aware of obstacles that can set back automation efforts:
- Maintenance—automated regression test suites are not valid indefinitely. A test suite should be updated promptly to reflect changes in the project, and the test team should regularly evaluate automated regression suites to isolate obsolete test cases.
- False positives—tests that report a failure when the product has no issues. These may be caused by timing issues, obsolete test cases and other environment-related reasons. They may also be caused by poorly designed or poorly coded “flaky tests” that produce inconsistent results.
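One simple way to flag flaky tests is to rerun each test several times and report any test whose result is inconsistent across runs. This is a minimal sketch; the sample test is deliberately made to alternate between passing and failing (deterministically, for illustration).

```python
def detect_flaky(test_fn, runs=5):
    # Rerun the test and collect the distinct outcomes it produces.
    results = {test_fn() for _ in range(runs)}
    # Observing both True and False across runs marks the test as flaky.
    return len(results) > 1

_calls = {"n": 0}

def sometimes_fails():
    # Hypothetical flaky test: alternates between fail and pass.
    _calls["n"] += 1
    return _calls["n"] % 2 == 0

def always_passes():
    # Hypothetical stable test.
    return True
```

Tests flagged this way should be fixed or removed rather than rerun until green, since they erode trust in the regression suite's signal.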
Why is Regression Testing Important in Agile Development?
In an agile framework, the team focuses on functionality planned for the sprint. However, while the team attends to a particular product area, they cannot be expected to take into account the risks their changes might pose to the entire system. A regression test will show areas affected by the team’s recent changes, across the codebase, assuming coverage is sufficient.
You should execute regression tests shortly after changes are made, preferably by automatically running regression tests as part of the build process. When feedback comes late, the team may already be implementing changes in other areas of the system.
Regression Testing Challenges Faced By Agile Teams
Several common challenges can make regression testing difficult for an agile team:
- Changes—management and customers sometimes make excessive changes to requirements. These changes can be so volatile that entire iterations are wiped out. This poses a serious risk to any test automation strategy.
- Cannot use record-and-playback testing tools—teams need to wait until functionality is ready to employ traditional, test-last tools with record-and-playback features. Therefore, traditional automated functional testing tools don’t work in an agile context.
- Regression test growth—the scale of regression testing increases with each sprint, and in large projects regression tests quickly become unmanageable. To ensure regression testing remains manageable, your team should automate, but also review tests frequently and remove obsolete or ineffective tests.
- Lack of communication—effective communication should exist between the automation testing team, business analysts, developers, and stakeholders. This ensures a good common understanding of changes to the product—which features are new and require new regression tests, which ones are undergoing changes and should be closely tested, and which ones are removed or deprecated and no longer need regression testing.
- Special testing skills—as the project develops, specialist skills will be needed for test areas such as integration and performance testing. The team should leverage test specialists, either within the agile team or from other parts of the organization, to gather and plan testing requirements.
- Test case maintenance—the more test cases you automate, the more you can verify the quality of existing functionality. However, more automated test cases mean more maintenance. If test cases are too loosely coupled to product functionality, they may pass even when issues exist; if they are too rigid, they will need to be rewritten and updated with every small change to the system.
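The pruning of obsolete tests described above can be sketched as a periodic cleanup pass. The feature metadata attached to each test here is hypothetical; in practice it could come from test tags or a traceability matrix.

```python
# Maintenance sketch: drop regression tests whose target feature has
# been removed from the product (hypothetical metadata).

ACTIVE_FEATURES = {"login", "checkout", "search"}

SUITE = [
    {"name": "test_login", "feature": "login"},
    {"name": "test_legacy_export", "feature": "export"},  # feature removed
    {"name": "test_search", "feature": "search"},
]

def prune_obsolete(suite, active_features):
    """Keep only tests that still map to a live product feature."""
    return [t for t in suite if t["feature"] in active_features]
```

Running this kind of review every few sprints keeps suite growth in check and keeps execution time proportional to the product that actually ships.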
Building a Regression Testing Strategy for Agile Teams
Before building a regression testing strategy:
- Gather all test cases you intend to execute
- Identify improvements that can be made to the test cases
- Estimate the time for the execution of the test cases
- Outline what can be automated and how
Building a regression testing strategy:
Using smoke and sanity test cases
Smoke and sanity testing come before regression tests and can save time for testing teams. Sanity testing is a run-through of the basic functionality of an application, prior to the additional testing of a new release, which informally confirms that functionality works as planned.
To conduct smoke testing you need a subset of test cases which test primary and core product workflows, such as startup and login, and can run very quickly.
You can use sanity tests and smoke tests to quickly assess whether an application is too flawed to warrant any further testing, such as regression testing. This is much better than running regression tests on a product that doesn’t load or enable login, and then investigating why hundreds or thousands of regression tests are failing.
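The smoke-gate idea above can be sketched as a short pipeline: run a few fast smoke checks first, and skip the much larger regression suite when the build is fundamentally broken. All checks here are hypothetical stand-ins.

```python
def app_starts():
    # Hypothetical smoke check: does the application launch?
    return True

def login_works():
    # Hypothetical smoke check: does the core login flow work?
    return True

SMOKE_CHECKS = [app_starts, login_works]

def run_suite(smoke_checks, regression_tests):
    # Gate: if any smoke check fails, don't waste time on regression.
    if not all(check() for check in smoke_checks):
        return "smoke failed: regression suite skipped"
    failures = [t.__name__ for t in regression_tests if not t()]
    return f"regression run: {len(failures)} failures"
```

The gate turns a wall of thousands of red regression results into a single, obvious "build is broken" signal.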
Finding error-prone areas
Include the test cases that fail most often. Some areas in the application are so error-prone that they can fail after a minor coding modification. You can keep track of these failing test cases during the product cycle and include them in the regression test suite.
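Tracking which tests fail most often across cycles can be automated from your test-run history. The failure log below is hypothetical sample data; real input would come from your CI results.

```python
from collections import Counter

# Hypothetical failure log: one entry per failed test per cycle.
FAILURE_LOG = [
    "test_payment", "test_payment", "test_payment",
    "test_upload", "test_search", "test_payment", "test_upload",
]

def frequent_failers(log, threshold=2):
    """Return tests that failed at least `threshold` times, sorted by name."""
    counts = Counter(log)
    return sorted(name for name, n in counts.items() if n >= threshold)
```

Tests surfaced this way are strong candidates for a permanent place in the regression suite, since they guard the most fragile areas of the codebase.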
Test case prioritization
In a risk-based approach, a testing team selects test cases that cover the application areas most affected by changes in the project. They also rank them according to priority. Regression tests focus on product areas with the greatest perceived risk of quality issues.
Prioritize the test cases according to critical and frequently used functionality. When you select test cases based on their priority, you can shrink the regression test suite, save maintenance time and make it possible to run regression tests faster and more frequently.
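A risk-based ranking can combine how critical a piece of functionality is with how frequently users exercise it. The criticality and usage weights below are hypothetical; real values would come from product analytics and risk assessments.

```python
# Risk-based prioritization sketch: score = criticality x usage share.

CASES = [
    {"name": "test_checkout", "criticality": 5, "usage": 0.9},
    {"name": "test_settings", "criticality": 2, "usage": 0.2},
    {"name": "test_login",    "criticality": 5, "usage": 1.0},
]

def rank_by_risk(cases, top_n=2):
    """Return the names of the top_n highest-risk test cases."""
    scored = sorted(cases,
                    key=lambda c: c["criticality"] * c["usage"],
                    reverse=True)
    return [c["name"] for c in scored[:top_n]]
```

Capping the suite at the top-ranked cases is what makes it practical to run regression on every build rather than once per release.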
Analyzing bug reports
Some regression testing tools integrate with error tracking tools. This allows you to see rich information about what happened during a regression test – if it failed, what failed and exactly which line of code was affected. Error tracking tools can also help you obtain screenshots and other metrics about failures during regression testing, helping identify and debug the issue.
Testers should communicate with product owners to monitor changes in requirements and assess their impact. They should also communicate with developers so they know what changes were made during an iteration.
A New Approach for Regression Test Prioritization
Agile teams move fast, and regression test suites can become a major burden. In large projects prioritizing regression tests is a must, but today teams are forced to prioritize based on “tribal knowledge” of product areas that are error-prone, anecdotal evidence from production faults, and imperfect metrics like unit test code coverage and defect density.
A new class of tools called Quality Intelligence Platforms provides a more scientific approach to test prioritization. For example, SeaLights is a platform that integrates with your regression testing tools, as well as the other tools you use for unit, functional, integration and acceptance tests, and collects data about test execution, test coverage and how frequently features are actually used in production.
Based on this data SeaLights gives you a clear measurement of quality risks—which areas of the product are the most likely to have errors that affect your users. By focusing on those areas, you can save a large percentage of regression tests and improve sprint velocity.
Read our white paper to learn what your regression tests do not cover, remove unnecessary tests and add the regression tests that really matter.