Continuous integration has been around for ages and is a cornerstone of agile infrastructure. Everyone does CI (or says they do), and if you want to do Continuous Delivery, everyone will tell you that you must first get your CI act together.
If continuous integration is a highway that lets developers reach their destination in hours instead of days or months, testing is the rush-hour traffic on Interstate 5. Testing pulls in exactly the opposite direction of what we want from a continuous delivery pipeline:
- We want to move fast – writing new tests takes a lot of time.
- We want to trigger a build on every commit – tests take too long to run.
- We want to release – tests break and we don’t know what the problem is.
Undoubtedly, testing is one of the biggest bottlenecks and challenges in a CI system, and getting it right is crucial.
To help you prepare, we put together five common pitfalls of continuous integration testing. By understanding these pitfalls and learning how to overcome them, you can remove the roadblock, or at least go from a complete standstill on your CI highway to a minor slowdown.
1. Not Enough Tests
When release cycles are fast and furious, the classic pitfall is not having enough tests. QA teams are becoming a thing of the past, and developers are often asked to write tests “as part of the sprint”, without being given additional resources to do so.
Testing today is supposedly part of the “definition of done” – you can’t finish a dev task unless you’ve tested it. But in reality, developers are measured by the number of story points they complete, not by the depth or quality of the test code they write.
This can leave parts of an application untested, or only shallowly tested, leading to faults that are discovered only in production.
What to do about it: Testing takes time. Plan it into your sprint and resource allocation, and ensure developers tasked with testing actually have the skills, time and motivation to do it.
2. Too Many Tests
In organizations where quality is a strong driver, whether because of culture, management requirements or highly demanding customers, there can be a tendency to over-test. Massive resources get invested in building continuous integration testing infrastructure, leading to gigantic test suites that take hours to run and are infeasible to maintain.
Then an army of engineers is tasked with “optimizing the tests” – getting that test suite to run in just 20 minutes instead of 15 hours. Often, test optimization is not really the problem – the problem is that tests are bloated to begin with.
Any test that cannot be maintained over time with existing resources is a drain on the system. It takes time to run, and even if it finds bugs at first, it will quickly lose its effectiveness and become a white elephant. Similarly, any test that cannot be run often enough each day, given the team’s development velocity and reasonable technical aids, is likely a waste.
What to do about it: Try to prevent bloated test suites. Avoid building automated tests that you can’t maintain or can’t reasonably run within your CI cycle; one common way to keep the per-commit run fast is sketched below.
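As one illustration, many teams split the suite so that only fast tests gate every commit, while the expensive ones run on a schedule. A minimal sketch, assuming a Python codebase tested with pytest (the test names and scenarios are hypothetical):

```python
import pytest

def test_parse_order_id():
    # Fast unit test: cheap enough to run on every commit.
    assert "ORD-123".split("-")[1] == "123"

@pytest.mark.slow
def test_full_checkout_flow():
    # Hypothetical end-to-end test that exercises real services.
    # Tagged "slow" so the per-commit CI job can exclude it.
    ...
```

With the `slow` marker registered in `pytest.ini`, the per-commit job runs `pytest -m "not slow"` and a nightly job runs the full `pytest` suite, so the expensive tests still run regularly without blocking every build.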
3. Automating the Wrong CI Tests
Zubin Irani, who heads a respected agile consultancy, suggests a five-step process for prioritizing CI automation. He proposes asking, before you automate anything, how complex or problematic the process really is, and whether it is truly worth automating.
The main idea here is that automation is expensive. If you invest in automation, you want it to provide maximum value for the agile team and end users. Investing in the wrong place can lead to ineffective automated tests, and also to bloated test suites, as noted above.
Based on Irani’s framework, here are a few pitfalls you should try to avoid in continuous integration testing:
- Automating tests for scenarios which are rarely used
- Automating tests which take a very short time to run (even if they are manual)
- Automating tests which do not involve multiple people, teams or systems and so are not causing a bottleneck in your pipeline
- Automating tests which are not error prone
- Automating tests which are not urgently needed for your CI process to function (in other words, it might be possible to skip them or solve them in other ways)
Automating any of these five types of tests will take you in the wrong direction – you’ll invest resources but won’t really make things easier, users happier, or release cycles faster.
What to do about it: Before you automate any test, consider if it belongs to one of the five categories above. If it does, find an earnest intern to do it – and automate something else!
4. Relying on Useless Test Metrics
Some test metrics are just plain useless – see the comprehensive post by Gilad David Maayan on DZone. A few examples:
- Number of test cases executed – says nothing about what those cases actually test or whether they are effective at all
- Number of bugs per tester – encourages inefficiencies such as hunting for trivial bugs, and promotes an “every man for himself” mentality
- Percentage pass rate – easy to manipulate; for example, if one of ten tests fails, the pass rate is 90%, but break the nine passing tests into ninety small ones and it becomes 90 out of 91, nearly 99%, with no change in quality
- Unit test code coverage – does not account for the quality of the unit tests, and ignores other, crucial types of testing such as integration and system testing (see the sketch after this list)
- Percentage of automation – sounds great, but reveals nothing about the quality of the automation; if the automated tests are poorly designed, you’ll be worse off than in the manual days
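To see why the last two metrics mislead, consider a contrived Python sketch (not from the original post): the first test below executes every line of `apply_discount`, so coverage tools report 100% for it, yet it asserts nothing and would keep passing if the logic were broken.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    return price * (1 - percent / 100)

def test_apply_discount_runs():
    # Executes every line, so line coverage is 100%...
    # ...but with no assertion, a buggy implementation passes too.
    apply_discount(100.0, 20.0)

def test_apply_discount_result():
    # A meaningful test pins the expected behavior down.
    assert apply_discount(100.0, 20.0) == 80.0
```

Both tests count identically toward coverage and automation percentages; only the second one actually protects against regressions.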
Many teams rely on these and similar continuous integration metrics to boast about the test automation within the CI cycle. Managers, stakeholders and the teams themselves should be wary of vanity metrics which can provide a false sense of security – until problems explode in production.
What to do about it: By all means use metrics, but combine metrics intelligently to provide a holistic picture of your CI testing. Be prepared to see problems and act on them.
5. Illusion of Test Coverage
Related to the previous point – many agile teams measure code coverage or the quantity of unit tests, and take this to be a measure of their “test coverage”. Some even aspire to 100% code coverage and believe, at least to some extent, that this equates to complete test coverage.
That couldn’t be further from the truth. Sure, code that is well covered by unit tests is likely to be of higher quality and have fewer defects than code that isn’t. But that doesn’t mean the code will play well with other systems and work well in production.
Most existing measures of test coverage do not include integration and system tests, which are the real litmus test of whether a system serves its business purpose. So code coverage can create an illusion of a thoroughly-tested system, while in reality, crucial user stories may not be tested at all.
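A contrived sketch of how this plays out, assuming Python with `unittest.mock` (the gateway and its response shape are hypothetical): the unit test fully covers the code and passes, but nothing checks that the real service actually honors the assumed contract.

```python
from unittest.mock import Mock

def charge_customer(gateway, amount_cents: int) -> str:
    # Assumes the gateway's response is a dict with a "txn_id" key.
    response = gateway.charge(amount_cents)
    return response["txn_id"]

def test_charge_customer_unit():
    # Mocked dependency: 100% coverage of charge_customer, and it passes.
    gateway = Mock()
    gateway.charge.return_value = {"txn_id": "abc123"}
    assert charge_customer(gateway, 500) == "abc123"

# If the real gateway actually returns {"transaction_id": ...}, only an
# integration test against the live service would catch the KeyError.
# Unit coverage alone says nothing about that contract.
```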
What to do about it: Find a more holistic measurement of test coverage. One way to do this is the SeaLights platform, which measures tests across all testing platforms and gives you a single, actionable measurement of your test coverage and quality.
Conclusion: The Right Tests and the Right Metrics to Keep Continuous Integration Moving
In this post, we covered five things that can go wrong in continuous integration testing and turn your agile development superhighway into a huge traffic jam.
Here are the five pitfalls and what you can do about them:
- Having too few tests because of limited development resources → give developers the time and skills to write proper tests.
- Having too many tests which you can’t run or maintain → ensure you build tests that are feasible to maintain in the long term and can be easily run within the CI cycle.
- Automating the wrong tests → consider, before automating, which processes are really complex and holding back the release cycle.
- Relying on useless test metrics → don’t use vanity metrics to show off your tests. Build a holistic measurement that shows problems and take the time to fix them.
- Illusion of test coverage → don’t assume that 96% code coverage means your system is well tested. The proof of the pudding is in integration and system tests, which many teams do not really have.