Agile Testing Strategies Across Four Lifecycle Stages
Many agile teams strive to validate their work to a high standard and look for ways to embed testing techniques into their everyday practices as much as possible.
Testing in traditional development typically revolves around a test plan. A heavily documented test plan, however, is not typical of agile development: agile testers need flexibility and must react quickly to changing requirements. Hence the call for an agile test strategy rather than an agile test plan.
This article provides an overview of the testing strategies agile teams apply in practice, placing them in the context of the agile software development lifecycle.
On this page you will learn:
- What is agile testing?
- Agile testing strategies in sprint zero
- Agile testing strategies in construction iterations
- Agile release planning (transition phase)
- Production environment
- 8 agile testing best practices across the development lifecycle
This page is part of our series of articles on agile testing.
What is Agile Testing?
Agile testing is a key component of agile software development. In agile development, testing begins prior to the onset of development. This differs from previous software approaches, where testing was a stage that took place once development was finished. Agile testing is continuous testing that occurs in parallel to development work and provides a continuous feedback loop.
Another feature of agile testing is that testers no longer form a distinct unit (there is no separate “QA department”). Testers are part of the agile development team. In some organizations there are no dedicated “testers” or “QA engineers” at all, and every team member is involved in testing; in others, test specialists remain, but they work alongside developers in every part of the software development cycle.
Agile Testing Strategies in Sprint Zero
Sprint zero comes before the first development iteration. In sprint zero the team builds environments, creates a product backlog, ensures they have a release plan, and completes other tasks necessary for the project to begin.
Because sprint zero precedes development, there is no code to test yet. To take advantage of sprint zero, the team should design a test strategy before doing anything else. They should also perform initial setup tasks, including installing testing tools, identifying the individuals responsible for testing, and scheduling resources such as the usability testing lab.
Agile Testing Strategies in Construction Iterations
The aim of the construction phase is to bring the system to a point where it is ready for pre-production testing. The team should now prioritize requirements and complete their specifications. They should also analyze requirements, design a solution to meet them, and code and test the software. If needed, early versions of the system should be released.
Most of the testing takes place during this phase. The team works according to prioritized requirements: with each iteration, they select the most essential requirements and implement them.
Construction iterations involve two types of testing:
- Confirmatory testing—focuses on verifying that the system meets the intent of the stakeholders.
- Investigative testing—explores the system to uncover problems that confirmatory testing misses.
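The distinction can be made concrete with a short sketch. The `apply_discount` function and its acceptance criterion below are hypothetical examples invented for illustration, not taken from the article:

```python
# Sketch contrasting confirmatory and investigative testing, using a
# hypothetical apply_discount() business rule as the system under test.
import random


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_confirmatory_ten_percent_discount():
    # Confirmatory: verify the specific behavior stakeholders asked for.
    assert apply_discount(100.0, 10) == 90.0


def test_investigative_edge_cases():
    # Investigative: probe beyond the stated requirement to surface
    # problems confirmatory checks would miss (here, random inputs).
    for _ in range(100):
        price = random.uniform(0, 10_000)
        percent = random.uniform(0, 100)
        result = apply_discount(price, percent)
        assert 0 <= result <= price  # a discount never raises the price


test_confirmatory_ten_percent_discount()
test_investigative_edge_cases()
print("all checks passed")
```

Confirmatory checks like the first test usually become the automated regression suite, while investigative probing is often done manually or with randomized and exploratory techniques.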
Agile Release Planning (Transition Phase)
In this phase, the system is nearing deployment to production. The team should train support staff, operations staff and end users. The team should also market the product release, finalize the system, run backup and restore procedures, and create user documentation.
Testers may conduct extensive testing during this phase, including beta testing. They may also fine-tune and rework the product, attending to significant defects. The final testing phase includes acceptance testing and full system testing. If the team tested the product rigorously during the construction iterations, the final testing stage should run smoothly.
Testers should perform endgame exploratory testing in an environment as close to production as possible, including servers akin to those in production and using a database with real or closely simulated data. This is particularly significant for a product that interacts with operating systems, environments or third-party products.
It is helpful to include individuals from outside the regular agile team. An outsider’s perspective is valuable because regular testers, with their deep knowledge of the product’s technical details, can lose sight of the user’s point of view. Stakeholders and individuals who hold different positions in the company can get involved in testing at this stage.
Production Environment
The aim of the production phase is to ensure that systems remain useful and productive after they are deployed to end users.
This phase can look different in different organizations and systems. Shrink-wrapped software, for example, does not need operation support but may need a help desk to aid users. Systems used internally in an enterprise may need operational staff to monitor and run them. Whatever the scenario, the goal of the production stage is to keep the system running, ensure it is stable, and to assist users.
Daily use of the application in production, combined with targeted testing, can give developers confidence that the application is running smoothly. The team can create a daily sanity checklist for production testing that addresses all the core functionalities.
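A daily sanity checklist like the one described above can be automated with a small runner. The checks below (`check_login`, `check_search`) are illustrative placeholders; a real team would replace them with probes against its own core functionality:

```python
# Minimal daily sanity checklist runner: run every check, record
# failures instead of stopping at the first, and report the results.
from typing import Callable, Dict


def check_login() -> bool:
    # Placeholder: would exercise the login flow of the deployed system.
    return True


def check_search() -> bool:
    # Placeholder: would run a known query and validate the response.
    return True


SANITY_CHECKS: Dict[str, Callable[[], bool]] = {
    "login": check_login,
    "search": check_search,
}


def run_sanity_checklist(checks: Dict[str, Callable[[], bool]]) -> Dict[str, bool]:
    """Run each named check; an exception counts as a failure."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False
    return results


print(run_sanity_checklist(SANITY_CHECKS))
```

Running all checks before reporting, rather than aborting on the first failure, gives the team a complete picture of production health each day.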
Testers can use tests like user acceptance testing, synthetic user testing, and disaster recovery testing in the production stage.
Establishing a Measurement of Quality to Guide Agile Testing
A new category of tools called Quality Intelligence Platforms is helping agile teams understand where to focus testing efforts in each stage of the product lifecycle – sprint zero, construction iterations, transition and production.
SeaLights is a platform that collects data about test execution across functional testing, acceptance testing and non-functional testing, monitors code changes, and tracks usage of features in production. It creates a visualization of “testing gaps”: areas of the product which have recently changed, are used in production, but are not sufficiently tested.
Testing gaps help agile teams immediately understand where to focus testing efforts. Instead of over-testing, or merely reacting to previous production faults, they can precisely target areas of the product that are at high risk of quality issues.
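The underlying idea can be sketched with plain set operations. The function, field names and sample data below are invented for illustration and do not reflect SeaLights' actual data model:

```python
# Hypothetical sketch of the "testing gap" idea: flag code areas that
# changed recently and are used in production but lack test coverage.
from typing import Set


def find_testing_gaps(recently_changed: Set[str],
                      used_in_production: Set[str],
                      covered_by_tests: Set[str]) -> Set[str]:
    """High-risk areas: changed and exercised by users, yet untested."""
    return (recently_changed & used_in_production) - covered_by_tests


changed = {"checkout.py", "search.py", "profile.py"}
used = {"checkout.py", "search.py", "login.py"}
covered = {"search.py", "login.py"}

print(sorted(find_testing_gaps(changed, used, covered)))
# → ['checkout.py']: it changed, is used in production, and is untested
```

A file that changed but is never exercised in production, or one that is already well covered, drops out of the gap set, which is what lets teams avoid over-testing.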
To learn more, read our white paper: Reactive Software Maintenance: The Silent Killer of Developer Productivity