Test Quality in CI/CD – Expert Roundup


Bar Kofman, Digital Marketing Manager | December 19, 2017

Software is everywhere! Gaining a competitive edge depends on the ability to develop and release software quickly, and adopting CI/CD addresses exactly this, through both technology-oriented change and process and organizational change.
Technologies such as CI/CD, microservices, feature toggles, and test automation, together with processes such as shifting left, shortening the feedback loop, failing fast, distributed and autonomous teams, and doing away with a separate QA center of excellence, all enable teams to move faster. Testing, however, can quickly become the bottleneck. The main capabilities teams should adopt to overcome that bottleneck are:

  1. Development of automated tests across the entire application, beyond unit tests. The selected test frameworks and automation tools must produce stable tests that do not break easily.
  2. Visibility and insight into which code and features are actually being tested. Pushing untested code toward production is a sure path to slower delivery, as production issues are the number one reason teams hold back from releasing software quickly.
  3. Optimizing the regression test suite for maximum coverage with the shortest possible execution time.
  4. A consolidated view of all quality activities across different teams, environments, and technologies.
  5. The ability to prioritize quality activities based on data, to mitigate risk when tradeoffs are required.

Adopting the right set of tools, processes, and mindset is the path to maintaining high quality while increasing speed in a CI/CD environment.

Avishai Shafir
Contributed by Avishai Shafir, SeaLights


What should CD testing look like? If you ask me, it all comes down to a properly integrated test strategy.
Such a strategy means that both developers and testers know who tests what, so that there are no gaping holes in the overall test coverage (to prevent quality leakage) and no work is duplicated (to prevent unnecessary wasting of time). Only once you have a clear idea of who tests what in the CD process can you decide whether or not to automate the accompanying tests, and at what level (unit, integration, end-to-end). Creating a well-functioning integrated test strategy requires adaptation from both testers and developers:

  • Developers should become even more aware of the importance of testing and start looking beyond plain happy-flow testing in their unit and integration tests. This removes a lot of potential defects that are otherwise detected later on in the process, if at all… If you’re a developer, please do read this article on Simple Programmer to see what I’m trying to get at.
  • Testers should start getting closer to developers, partly to better understand what they are doing, partly to help them with refining and improving their testing skills. This might require you to get comfortable reading and reviewing code, and if you’re into test automation or willing to become so, to start learning to write some code yourself.

Link to full article: https://www.ontestautomation.com/first-steps-into-testing-in-a-continuous-delivery-setting/

Bas Dijkstra
Contributed by Bas Dijkstra, OnTestAutomation,


A year or so ago, there was a tweet that appeared on my timeline multiple times across many weeks…or maybe it was months…I’ve slept since then. Clearly, my memory fails me on the specifics. What I do remember is that the question posed by the tweet was something like the following: is automation sufficient without a strategy? When discussing strategy, I like to ask two broad questions:

  • What do we want to accomplish?
  • How do we plan to accomplish it?

The “Whats“—these are our goals. Every organization has business goals. Clearly, product development must be in line with the goals, meaning testing goals must be in line with them as well. For instance, are we:

  • trying to reduce defects in a product?
  • trying to release faster or more often?
  • adding support for a new operating system to a product?
  • trying to gain market share?

Our testing strategy must be consistent with these goals, and thus our automation goals must be as well. If they are not, we'll just be doing the wrong thing faster. Having a good relationship with our extended audience, in this case "the business", can help us have a positive impact on achieving these goals. The “Hows“—these are our methods.

  • Do we need a small, fast, shallow smoke suite of tests that are executed on each deploy to each environment?
  • Are there some areas of our product that are difficult to test?
  • Could humans benefit from applying technology to these problems?

Regardless of the methods we choose, we must be sure they are in line with our goals. It’s important to note that the thoughts expressed above are not automation-focused; they focus on business strategies and testing. This focus is intentionally away from automation specifics to help ensure that automation is supporting our goals; automating is not a goal, it’s only a means to an end. So, back to the question of “is automation sufficient without a strategy?”; clearly, my opinion is “no, it is certainly not sufficient”.

Link to the full article: https://responsibleautomation.wordpress.com/2017/01/11/your-first-move-automation-strategy/

Paul Grizzaffi
Contributed by Paul Grizzaffi, Principal Automation Architect, Magenic. Author of Responsible Automation,


Joe: How does testing fit into continuous delivery? Is it an important piece, where you need to test before a build is promoted to the next stage and before it’s delivered to users? How does that process work?

Sahas: Testing, and quality as a whole, is absolutely an important aspect. I strongly believe in having a high-quality product, and I strongly believe in some of the practices associated with Agile quality. The world has moved on: we were shipping products maybe once every six months, once a year, once every two years. Now, everybody is thinking about shipping the product in a reduced cycle time. Ship it once in two weeks, once in four weeks, once in a month, that kind of thing.

Everything changed around that. The business needs are changing, so now it’s not only development that has to shift gears towards Agile. Everything has to shift gears towards delivering that product, which means quality aspects also have to change. At a high level, I’m a big believer in Agile quality. How do we fit a quality mindset into the product-building lifecycle?

Regardless of what you do, you have to build quality in. You shouldn’t treat quality as an afterthought, which unfortunately is how we’ve worked so far: first we get the requirements out, then we design, then we do some coding, and finally we say, “Okay. I’m ready to bring in some testers. Let’s start doing testing.” If we want to deliver product continuously, we have to evaluate quality continuously, at every stage. Testing throughout the cycle is a critical part of building quality in, and of fitting quality into continuous delivery.

We need to engage the quality evangelists, who think from a quality perspective, to test our understanding. The process goes like this: here is the feedback we got from the customer, and here’s what we built. We ask whether we built what the customer wanted, instead of taking the tester directly to a screen and saying, “Here’s my screen. Go and test its functionality.” Instead of checking the functionality, we need to check our understanding; this is the second aspect. Did we understand the problem correctly? Did we build the right solution, the right thing, the right way?

The third aspect we focus on is finding bugs. However, in a shorter cycle, if I find a bug, how likely is it that I will fix it before the product reaches my customer’s hands? So instead of just finding bugs, pair with a developer and be a part of the whole thing. Pair with your product owner. Try to understand what your customer really wanted, and focus on preventing bugs. Employ techniques such as checking in with the developer while he’s designing and coding, and try to influence the developer to think like you from a quality perspective, so that more areas are covered while the developer is still writing the code.

In some places, quality used to be a separate organization. We might create a quality department, or outsource it somewhere to some country, and QAs and testers and test automation experts would test the quality. We have to move away from that, though, and the whole team has to own quality. As the developer, I write the code, but I shouldn’t call to the other side of the window and say, “Hey, QA, go and test this for me.”

We, as a team, have to own quality. We as a team have to, for example, think about quality aspects while we’re thinking about design aspects. We also have to think about questions like: what is the expected load on this particular service? How do I verify that particular characteristic that my product owner wants? You’ve got to go back to the product owner and say, “You’re asking me to build this new service. What if it comes up in a Super Bowl ad? What if I get 30,000 hits in three seconds? How do you expect the service to react?” It has to be more collaborative. The whole team owning quality in order to deliver what will delight our customers: that’s the fourth aspect.

The fifth aspect is not automating absolutely everything. I believe you cannot really get rid of manual verification: you cannot clone your brain, the way your brain thinks, the way the product owner would see the product, the way your customer, or millions of customers, would see the product. Rather, you should use automation for its strengths, which means automating in a more rationalized fashion.

We have to think more in-depth. As a team, why would we want to repeat something that has already been automated at the unit test level? If something can be automated at the unit test level, why should we automate it at the GUI level? If something can be automated at the web service level, why would we automate it at the GUI level? We have to take a rationalized approach to what we automate and where. That’s the fifth aspect; I call it rationalized automation.

I would say, wherever I go, wherever I try to implement automation, wherever I try to implement quality, we would bank on these five things.

Link to the full podcast/transcript: https://www.joecolantonio.com/2015/10/15/episode-73-continuous-delivery-automation-podcast/

Joe Colantonio
Contributed by Joe Colantonio, founder, TestTalks. Author of the UFT API Testing Manifesto (interviewing Sahaswaranamam Subramanian),


Many applications start out simple. While trying to find a market fit and iterate around the product, the code base is still small and full of hacks that are used just to ensure things work. The tests are still narrow and on point but not exactly tuned for speed. After all, the overall build is still in the realm of minutes.

Once the application and product start to gain traction, things get more complex. The test suites start accumulating cruft. Builds start taking ten, twenty, or thirty minutes. After a couple of years, it takes hours to run a full build. At this point, there’s a unit test suite, integration tests, and user interface tests. Together they take significant time to run, slowly but surely reducing development velocity and seriously hampering a team’s ability to ship fast.

Test complexity is a silent productivity killer that’s hard to pinpoint when it reaches a critical mass. The increasing test complexity impacts velocity and makes the overall build more brittle and prone to breaking due to only small changes.

Test quality takes on a different meaning at this point. The tests themselves might be great, the code base well covered, but the sheer complexity starts getting in the way. At that point, teams will find their ability to continuously deliver value to the customer is hampered.

It seems inevitable that a team reaches this point when they strive to maintain good test coverage while also trying to avoid the increased complexity of microservices. Yet microservices also offer an answer to the problem these teams are now facing. They allow breaking out specific functionality into separate applications with separate test suites. Smaller pieces allow test suites to stay small, which brings back the velocity the team was used to in the early days.

Microservices come at the price of increased complexity in managing them at runtime, as well as in making sure that the services, and the teams building them, communicate well with each other. But for the ability to ship faster, that complexity can be a worthy trade-off. It helps to avoid one of the biggest test quality issues with continuous integration and delivery: a complex test suite and a slow build.

Mathias Meyer
Contributed by Mathias Meyer, CEO, Travis CI,


When you work on free software, what you get is a wild diversity of tools, preferences, languages, personalities. Testing in Ubuntu is about combining all those into a consistent, happy family, to build an operating system that is at the same time usable, useful, cool and transparent for our community. The challenge is enormous, but the opportunities that this diverse and distributed community present are even bigger.

The role of our community of upstream developers is to develop software giving their best efforts. The role of our users is not to remain passive, but to start owning their technology by contributing with their feedback, their ideas, requirements and maybe even their time and abilities. Our role is to make sure that the process for developers to get a new stable version published and ready to install is super simple, from commit to millions of users in a snap. We try very hard to make this enjoyable and fun for everybody, including us.

After 10 years of developing Ubuntu, learning from our community and adjusting to the new projects that appear every day, what we now know is that to release with high quality we all need to be flexible. This means adapting ourselves to the processes of the projects that build the different parts of our system, injecting ourselves in those projects with a very low-maintenance method that will get the project compiled, packaged and installable, and helping those projects complete automated user acceptance testing and crowd testing with our community of early adopters.

We now have build.snapcraft.io, a secure build farm that will build software and put it in the Ubuntu store, in the edge channel, every time that a new commit lands to the master branch. In this channel, it’s not visible to all the users, only to the early adopters who want to contribute testing. From here, each project can choose any path they want to assure the quality of their software, usually with a combination of automated and manual tests, running sometimes in isolation, sometimes in integration with the other projects that are following the same journey to stable. Once the testing is done, the project is moved to stable where it is visible to millions of users. We encourage the developers to be welcoming to new contributors, and we encourage our community to help us make an amazing, free and open operating system.

Leo Arias
Contributed by Leo Arias, snapcraft developer at Ubuntu,


Don’t leave failing tests unattended—everyone knows that tests can fail. Sometimes fixing a particular test is a hard job, so we leave it failing (and promise ourselves that we’ll fix it later). The result is often the same: the configuration goes red, everyone starts to ignore any new failed tests in that configuration, and the value of your tests is ultimately eliminated. Your tests still run, but the results are generally ignored.

To solve the problem, there must be a certain level of discipline in the team. If a test is failing and that is temporarily acceptable, you can skip it (e.g. throw a SkipException in the code) or use the tool to mute the test, so it doesn’t distort the real picture. Don’t forget to leave a proper comment, as in the sketch below.
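
If the team uses TestNG (the framework SkipException comes from), a minimal sketch might look like the following; the test class, flag, and ticket reference are purely illustrative:

    import org.testng.SkipException;
    import org.testng.annotations.Test;

    public class CheckoutTests {

        // Hypothetical flag: flip it back once the underlying issue (e.g. ticket PROJ-123) is fixed.
        private static final boolean DISCOUNT_SERVICE_FIXED = false;

        @Test
        public void appliesDiscountCode() {
            if (!DISCOUNT_SERVICE_FIXED) {
                // Reported as "skipped" rather than "failed", so the build stays meaningful
                // while the skip remains visible in every test report.
                throw new SkipException("Temporarily skipped until PROJ-123 is resolved");
            }
            // real assertions against the discount feature would go here
        }
    }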

Another important thing is to review builds from time to time to locate tests that started to fail. Usually, a single build doesn’t contain lots of changes and it’s possible to identify who broke the test or the whole build. It’s a good practice to review builds with your changes to identify whether you’ve broken any test.

CI/CD tools must provide a way to notify that a build has failed. In the simplest case, it’s just notification via email or IM. A more advanced case is when everyone can see that someone is taking care of the failing test or build (for example, investigations in TeamCity).

So, the bottom line is: without a certain level of discipline, the efficiency of tests drops significantly. CI/CD tools can help to locate the reason and show the current status to others, but they cannot fix tests for you.

Sergey Pak
Contributed by Sergey Pak, Developer, JetBrains


The CI/CD pipeline needs to incorporate functional requirements, performance, and security, with valid test results and reports.  As the software development world has moved towards using various sophisticated methodologies, CI/CD has become an important part of development.

Testing is added to the CI/CD pipeline to ensure that the software delivered is stable and 99% bug-free. Reporting of issues that occur during testing in CI/CD is key. The main reason to adhere to the CI/CD approach is to save time and have the whole deployment process automated. Most of the CI/CD processes tend to happen at night and we need to have full reports on how the tests have run, which tests failed, and why they failed.

CI/CD should definitely include performance and security testing, even though such tests don’t need to run as frequently as the functional tests; they should be part of a pipeline that runs roughly once every two sprints. JMeter and BlazeMeter are both good performance testing tools. The Jenkins “Performance Plugin” can help you understand performance trends, and JMeter provides sophisticated reports that measure the results against the specifications you supply describing how well you expect the system to work. These reports give you a detailed picture of the system’s performance.

JMeter 3 works with Selenium Grid to help you understand how the software performs when accessed from various systems and browsers. With the help of Maven and Jenkins, these tests can become part of your continuous deployment, and the resulting reports also include an APDEX (Application Performance Index) score. With Selenium Grid, JMeter can obtain realistic results for how the system behaves when accessed by many different browsers concurrently. My talk runs through how to address these challenges from the very beginning, so that you can avoid being caught out by unexpected load when traffic peaks.

When it comes to functional testing, capturing screenshots and screencasts is very helpful for visualizing where your tests have failed in the CI/CD pipeline. Using ExtentReports together with screenshots and screencasts can help you gather important information and understand the failures and successes of the system.

Security can be added to your CI/CD pipeline with the help of OWASP’s own ZAP API at a very early stage, so that you can find the security glitches in your system and fix them as early as possible.

CI/CD provides a vast set of practices and processes that you can use to get the most out of software testing—functional testing, UI testing, API testing, performance testing, and security testing can all be part of your CI/CD pipeline and help you release reliable, solid software.

Christina Thalayasingam
Contributed by Christina Thalayasingam, Zaizi,


Agile methodologies teach the breaking down of software development into smaller tasks known as “User Stories”. This enables early feedback, which helps align software features with market needs.

With the widespread adoption of agile practices, teams are able to deliver functional software in smaller iterations.

Continuous Integration (CI) is the practice of checking in code regularly; each feature is integrated and tested several times a day from a shared codebase. Though this pushed teams toward smaller and more frequent releases, test deployment and releases became strained, which ultimately affected the end goal.

Jez Humble’s breakthrough book, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, talks about treating the entire software lifecycle as a single process—and one that could be automated. While agile addressed the needs of developers, an investment in DevOps initiatives and Continuous Delivery offers businesses much higher returns.

How to do automation right?

When selecting the right processes to automate, always ask yourself, “Does this need to be automated now?” The following checklist will give you the answer:

  1. How frequently is the process or scenario repeated?
  2. How long is the process?
  3. What people and resource dependencies are involved in the process? Are they causing delays in CI/CD?
  4. Is the process error-prone if it is not automated?

CI/CD is not just about the tools! If you’re looking at the tools without thinking about the people, processes, and the company structure, your CI/CD is never going to succeed. Implement a process using the tools, not the other way around. It’s crucial to understand the process and combined requirements of the organization and then choose the right set of tools to fulfill technical requirements.

Seamless coordination between CI and CD

CI feeds CD. The toughest aspect of CI/CD is the human factor involving collaboration between various teams – development team, quality assurance team, operations team, scrum masters etc. Collaboration and communication cannot be automated. To measure the level of coordination, benchmark your CI/CD processes against the best in the business.

Keep the goal in sight.

Design a meaningful dashboard by assessing what data everyone wants and establish a standard narrative for what that data means. Do not obsess over substance at the expense of appearance. Progressive assessment is important before metrics and dashboards. CI/CD is ultimately essential because it meets business goals. Failed releases lead to customer dissatisfaction and lower revenues.

CI/CD is not possible without continuous testing. In order to build quality software, we need to adopt different types of tests—both manual and automated—continually throughout the delivery process. CloudQA automates your regression testing, cross-browser testing, load testing, and continuous monitoring, and fits seamlessly into your CI/CD pipeline by providing out-of-the-box integration with CI/CD tools like Jenkins, CircleCI, Travis CI, and JIRA.

Arun Kulkarni
Contributed by Arun Kulkarni, CloudQA,


Up until the last few years, Agile really stopped at the custom app development team’s door. But now organizations realize that they need to apply the same principles to their mission-critical enterprise applications that run the majority of their business (e.g. ERP, Finance, Supply Chain, HR, etc). Organizations are looking at how to make Agile and DevOps work across this bigger landscape.

CI/CD systems require effective automation that can be run on demand for continuous testing as part of the overall continuous delivery process. This is much more challenging in the enterprise application space, where you are looking at end-to-end tests that can have thousands of steps and traverse multiple systems (SAP, Salesforce.com, Outlook, etc.). The 2017-18 World Quality Report cites that just 16 percent of end-to-end business scenarios are executed with test tools. The majority of end-to-end tests are still being done manually, often by business users or domain experts.

This brings up another core challenge when it comes to implementing CI/CD across enterprise applications. Transferring the knowledge needed from domain experts to automation specialists can take months. The more companies try to work around these legacy systems by developing customized surrounding apps, the more the problem compounds. Customized apps still need to be tested against the supporting business processes, and all gains achieved on the custom dev side from the Agile process are quickly lost.

The challenges of end-to-end business process testing don’t end with the availability of automated tests. Running tests as part of a CI cycle also presents a number of scheduling challenges. Unlike custom apps, for which virtual dev/test environments can be spun up in minutes, tests for systems like SAP often need to run in pre-production environments with limited availability. Executing UI-driven tests at scale also requires an active user session, which means installing local agents and logging into machines. All of this adds to the complexity.

Lastly, UIs are also evolving. SAP Fiori is a great example of where tools like Selenium fall short. Being able to create a continuous testing strategy that includes UI and end-to-end business tests is critical to delivering a CI/CD strategy for mission-critical enterprise applications. Worksoft understands this better than anyone. For more on how to build automated tests for mission-critical processes go to https://www.worksoft.com/products/worksoft-certify.

Shoeb Javed
Contributed by Shoeb Javed, CTO, Worksoft,


Software testing is usually performed as an isolated and independent activity confined within an allocated time span and set of resources. Once the development is done, testers are expected to find any issues and report them back for fixes. This process might have worked decades ago. However, in the internet age, the landscape is much different.

Application complexity and size coupled with the pace of change makes the traditional method of testing not only inefficient but highly ineffective for delivering a quality product. Particularly with larger applications, it is impossible for testing to be performed in a confined time and space with an expectation of decent coverage.

If testing cannot be done just during the testing phase, how else can it be done? Quality was always supposed to be not one specific group’s problem, but something every person related to the product is responsible for. In theory, it sounds great, but the practical implementation of this idea has been very limited.

Continuous testing is just that. Instead of confining the testing process to one group and one time, it’s done by everyone, all the time. In some way, every person is involved in fueling the testing efforts directly, with tangible and measurable results, making quality everyone’s daily focus. Sales, marketing, support, development, and product management all contribute by providing market insights, customer feedback on what’s working, and technical help – together creating a continuous testing cycle and a great feedback loop to continuously test, measure, and refine the product’s quality efforts.

Ali Khalid
Contributed by Ali Khalid, Quality Spectrum LLC,


Test automation is an essential part of CI/CD, but it must be extremely robust. Unfortunately, tests running in live environments (integration and end-to-end) often suffer rare but pesky “interruptions” that, if unhandled, will cause tests to fail.

These interruptions could be network blips, web pages not fully loaded, temporarily downed services, or any environment issues unrelated to product bugs. Interruptive failures are problematic because they (a) are intermittent and thus difficult to pinpoint, (b) waste engineering time, (c) potentially hide real failures, and (d) cast doubt over process/product quality.

CI/CD magnifies even the rarest issues. If an interruption has only a 1% chance of happening during a test, then considering binomial probabilities, there is a 63% chance it will happen after 100 tests, and a 99% chance it will happen after 500 tests. Keep in mind that it is not uncommon for thousands of tests to run daily in CI – Google Guava had over 286K tests back in July 2012!
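
Those figures follow from the complement rule, assuming the runs are independent:

    P(at least one interruption in n runs) = 1 - (1 - p)^n
    p = 0.01, n = 100:  1 - 0.99^100 ≈ 0.63
    p = 0.01, n = 500:  1 - 0.99^500 ≈ 0.99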

It is impossible to completely avoid interruptions – they will happen. Therefore, it is imperative to handle interruptions at multiple layers:

  1. Secure the platform upon which the tests run. Make sure system performance is healthy and that network connections are stable.
  2. Add failover logic to the automated tests. Any time an interruption happens, catch it as close to its source as possible, pause briefly, and retry the operation(s). Do not catch just any type of error: pinpoint specific interruption signatures to avoid false positives. Build the failover logic into the framework rather than implementing it for one-off cases (see the sketch after this list). Aspect-oriented programming can help tremendously here. Repeating failed tests in their entirety also works and may be easier to implement, but it takes much more time to run.
  3. Log any interruptions and recovery attempts as warnings. Do not neglect to report them, because they could indicate legitimate problems, especially if patterns appear. It may be difficult to differentiate interruptions from legitimate bugs, or certain retry attempts might take too long to be practical. When in doubt, just fail the test – that’s the safer approach.
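
As a rough illustration of the failover logic in point 2, here is a minimal sketch assuming a Java test suite built on Selenium WebDriver; the helper class, retry parameters, and specific interruption signatures are illustrative only:

    import java.util.function.Supplier;

    // Shared framework helper: retries only known interruption signatures, never real failures.
    public final class Retry {

        public static <T> T withRetry(Supplier<T> operation, int maxAttempts, long pauseMillis) {
            RuntimeException lastInterruption = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return operation.get();
                } catch (RuntimeException e) {
                    if (!isKnownInterruption(e)) {
                        throw e; // a real failure: let the test fail
                    }
                    lastInterruption = e;
                    // log as a warning so patterns stay visible (point 3)
                    System.err.println("WARN: interruption on attempt " + attempt + ": " + e.getMessage());
                    sleep(pauseMillis); // pause briefly before retrying
                }
            }
            throw lastInterruption; // retries exhausted: surface the interruption
        }

        // Pinpoint specific signatures (examples only) instead of swallowing every error.
        private static boolean isKnownInterruption(RuntimeException e) {
            return e instanceof org.openqa.selenium.TimeoutException
                || e.getCause() instanceof java.net.ConnectException;
        }

        private static void sleep(long millis) {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }

        private Retry() { }
    }

A test would then wrap only its fragile operations, for example Retry.withRetry(() -> driver.getTitle(), 3, 2000), while the assertions themselves stay outside the retry.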

Andrew Knight
Contributed by Andrew Knight, LexisNexis,


A couple of years ago I worked on a big product where we used Specification by Example (Behaviour Driven Development) extensively. There were 5 developers on the team, a product owner, and a business analyst. We worked for several years on the product. We started to use Specification by Example to solve communication problems between the product owner and the developers – we needed a way to bridge the communication gap, and Specification by Example proved very effective.

After a while, we began to automate our scenarios, creating an executable specification. We added a Living Documentation (that’s what started my involvement with Pickles, the open source Living Documentation generator) and integrated the results of the automated test runs into that Living Documentation. We had a pretty cool automated build with virtual machines where we deployed the software and ran our battery of automated scenarios. Productivity reached an all-time high. The number of user stories that were rejected by the product owner at the end of the iteration became zero.

Gradually, problems started to appear in our setup. We simply had too many scenarios: we began to focus on quantity of scenarios, not on the quality. The scenarios became more technical and less easy to read, so they lost their power to explain the workings of the system. The scenarios took a long time to run, so the running time for the whole suite increased to several hours. Due to timeouts, on average, 0.5 percent of the scenarios might fail – but we had 400 scenarios so there was a failure in every run. The value of our automated verification setup decreased severely.

What I learned from this: when doing automated scenario verification, focus on quality and not on quantity. If you want lots of tests, write good unit tests that run in the blink of an eye. But for your integration tests, or end-to-end tests, or scenario verifications, pick a small set of important scenarios and make sure they run reliably and reasonably fast. That way you will get the most value from those tests in the long run.

Dirk Rombauts
Contributed by Dirk Rombauts, Pickles,
*The open source Living Documentation generator


3 Barriers to Continuous Delivery

Continuous delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. Sounds pretty great, right? New features can be added quickly, and there is always a usable release ready to go. So why are only 65% of organizations implementing continuous delivery? Studies show that the barriers for companies moving to implement continuous delivery are integrating automation technologies, people, and collaboration platforms.

Integrating automation technologies

Change always comes with growing pains. Integrating new technologies with legacy software can be challenging, but getting to continuous delivery, releasing better software faster, is worth the challenge.

People

The race for IT talent is fierce, and finding the right fit for your team is a challenge. Continuous delivery is cutting-edge technology and there’s naturally a limited number of people with experience.

Collaboration Platforms

Because continuous delivery aims to streamline the development process, it means your collaboration platform needs to be able to keep up with the flow of information between teams.

These barriers look pretty intimidating when you begin the process of continuous delivery, but having the people, process, and tools to help you release the best software for your customers is the key to success.

Sarah Burns
Contributed by Sarah Burns, Marketing Specialist, Celtic Testing Experts,


CI/CD is like a chain, and testing is one of its essential links. There are many types of testing and one of them is performance testing. If we want to be sure that our application meets its SLA requirements, we have to run performance tests with every build. A good practice is to automate these tests and put them into our CI pipeline. A good performance testing tool, dedicated environment, and trending reports are crucial elements in this process.

Thanks to the cloud, container technologies, and virtualization it is quite simple to prepare a testing environment even before each test run. The tricky part, however, is to integrate the test tool with our CI tool to easily run load tests. We need to run test scripts, collect data, and lastly, display results. Due to differences between test and production environments, we usually do not focus on absolute numbers but rather relative comparisons. Choosing the right load test tool could save us a lot of work.

At SmartMeter.io we are aware of this. Therefore, reporting in SmartMeter.io is as simple as possible (there are literally “one-click” reports). Reports also contain Trends analysis in clear graphs and tables, not dependent on any plug-in or CI tool, which makes it possible to use our favorite tool or a tool preferred by our client. If we want to be sure that the metrics of our application meet our business SLA, we can use acceptance criteria provided as a core component of SmartMeter.io. Every report tells us which criteria passed or failed. Any failed criterion marks the whole test run as failed, so any CI could recognize that load tests failed. There is no need to check every report. Rather, you can focus your work on the things that matter.
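
To make the general idea concrete, independent of any particular tool, here is a toy sketch of an acceptance criterion a CI stage could run: it samples response times and fails the build through a non-zero exit code when the 95th percentile exceeds a budget. The URL, sample count, and budget are placeholders, not SmartMeter.io functionality:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Toy SLA check: fail the build if the 95th-percentile response time exceeds a budget.
    public class SlaCheck {

        public static void main(String[] args) throws Exception {
            // Placeholders: the target URL and budget would normally come from CI configuration.
            String url = System.getProperty("sla.url", "http://localhost:8080/health");
            long budgetMillis = Long.getLong("sla.p95.millis", 500);

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

            List<Long> timings = new ArrayList<>();
            for (int i = 0; i < 50; i++) {
                long start = System.nanoTime();
                client.send(request, HttpResponse.BodyHandlers.discarding());
                timings.add((System.nanoTime() - start) / 1_000_000);
            }

            Collections.sort(timings);
            long p95 = timings.get((int) Math.ceil(timings.size() * 0.95) - 1);
            System.out.println("p95 = " + p95 + " ms (budget " + budgetMillis + " ms)");

            // A non-zero exit code is enough for any CI tool to mark the stage as failed.
            if (p95 > budgetMillis) {
                System.exit(1);
            }
        }
    }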

Martin Krutak
Contributed by Martin Krutak, SmartMeter.io


The essential component required to enable Continuous Integration/Continuous Delivery is automated testing. This is, of course, a correct statement, but only in part. Yes, automated testing is an essential component for enabling Continuous Integration/Continuous Delivery. However, the problem with automated testing is that no matter how good the automated testing solution is, the creation of the automated testing scripts or scenarios will always lag behind the completion of the development work.

Often this is because the automated testing scripts cannot actually be written until the development work is completed, i.e. the field or button needs to be added to the screen before an automated test can be written to test it. Because of this delay, automated testing lags behind development by as much as one, two, or even more sprints. In the worst case, due to project schedule pressures, automated testing is abandoned for a period of time to meet a milestone or deadline, with the idea that it will be reinstated for the next phase of delivery. Of course, all this does is build up a bigger automated testing debt, which has to be paid off in the next phase.

Testing needs to lead development, not lag behind it, for us to remove that lag and debt build-up. Enter Test Driven Development (TDD) or Acceptance Driven Development (ADD).  With these approaches, the requirements are written in the form of the automated tests that will be used to test the system and deem it acceptable.  Developers make changes to the system based on these definitions and acceptance criteria.

Once the automated tests and regression tests pass, the developer knows his work is completed and the system can be delivered immediately and continuously.  There is no lag at all between development and testing because automated testing scripts are written as part of the requirements definition process. The biggest change we need to make to enable Continuous Integration/Continuous Delivery is for testing to lead development, not lag behind it.

With this in mind, we can say that the essential elements required to enable Continuous Integration/Continuous Delivery are automated testing and a Test or Acceptance Driven development and testing approach.  Only when these two components are used in combination can the dream of Continuous Integration and Continuous Delivery become a reality.  Visionary companies in this space like AutotestPro offer solutions which combine TDD and automated testing so that testing does not lag behind development; instead, testing leads the development.

Paul Chorley
Contributed by Paul Chorley, Co-Founder and Managing Director, AutotestPro Ltd,


DevOps testing is the portion of the DevOps Pipeline that is responsible for the continuous assessment of incremental change. DevOps test engineers are the team members accountable for testing in a DevOps environment.  This role can be played by anyone in the DevOps environment such as QA personnel, developers, or infrastructure and security engineers for their respective areas. The DevOps test engineer can be anyone who is trusted to do the testing.

In a non-DevOps environment, independent testers from the QA team test the products passed on to them by developers, and QA then passes the product on to operations. In a DevOps environment, there is a need to pass on error-free code in small chunks (e.g. microservices). This means testing more frequently throughout the process, end-to-end across the development and deployment cycles. The time for each increment in DevOps is very short.

The combination of short increments and the spreading of tests across the end-to-end pipeline requires fast, automated tests and end-to-end coordination of test results. Everyone in IT needs to learn to test, both manually and with automation, and to know how to read the test results.

Testing maturity is a key differentiator of DevOps maturity:

  • Many organizations automate integrations, builds, and delivery processes but have trouble with the subtleness of test orchestration and automation
  • There is a vital role for testing architects and testing teams to offer their expertise in test design, test automation, and test case development with DevOps
  • Whether the organization is using a test-driven development methodology, behavior-based test creation, or model-based testing, testing is a vital part of the overall DevOps process — not only to verify code changes work and integrate well — but to ensure the changes do not mess up the product
  • Testing is an integral part of product development and delivery

There needs to be constant testing so that error-free code can be merged into the main trunk and we can get deployable code out of CI/CD. This requires people to plan the environment, choose the right tools, and design the orchestration to suit the need.

Effective DevOps testing requires Development, QA, and IT Operations teams to harmonize their cultures into a common collaborative culture focused on common goals. This culture requires leaders to sponsor, reinforce, and reward collaborative team behaviors and to invest in the training, infrastructure, and tools needed for effective DevOps testing.

A DevOps testing strategy has the following components:

  • DevOps testing is integrated into the DevOps infrastructure
  • DevOps testing emphasizes orchestration of the test environment
  • DevOps tests are automated as much as possible
  • The goal of DevOps testing is to accelerate test activities to as early in the pipeline as possible

The 5 Tenets of DevOps Testing are:

  • Shift Left
  • Fail Often
  • Relevance
  • Test Fast
  • Fail Early

The test strategy also requires the application to be designed in a loosely coupled architecture. It is very important to have a good design before moving on to automation. Test result analysis is another key activity, performed to ensure that proper testing takes place with the right coverage.

Some examples of open source DevOps testing frameworks are Jenkins and Robot.

Examples of commercially licensed testing frameworks are CloudBees, Electric Cloud, and TeamCity. For further detailed learning, the DevOps Test Engineer course is recommended.

Niladri Choudhuri
Contributed by Niladri Choudhuri, Xellentro,


Our approach to continuous integration testing for Fareboom Mobile relied on four types of tools: Git (alternatively, Bitbucket) for code tracking and collaboration, TeamCity (alternatively, Jenkins) for test management, HockeyApp for build distribution, and finally Slack, integrated with TeamCity and HockeyApp, for notifications and troubleshooting.

There are numerous great instruments, and the key is to find the ones that work best for you. For instance, we prefer TeamCity because it allows our QA engineers to separately test the required branches of code. It’s important to remember that the numerous benefits of the CI/CD approach can conceal some threats you should be aware of. However, you can get around these threats if you’re prepared. Thus, our recommendations for testing in the CI/CD process would be:

  • Don’t strive for the highest code coverage, since you don’t know if the test was performed on the relevant functions
  • Adopt the practice of test-driven (test-first) development, where tests are written before the code. This allows you to prevent situations where the test is engineered around possible bugs
  • Conduct research before introducing any tools
  • Establish process management to ensure the team is following strict guidelines

Alexey Balykov
Contributed by Alexey Balykov, AltexSoft,

*AltexSoft is a technology consulting company with expertise in software engineering, data science, and UX design.


If you’ve been working as a tester for any length of time, you can’t have failed to notice the shift towards CI/CD in many projects and organizations. Businesses, projects, and operations teams all want to take advantage of at least some of the perceived benefits of being able to quickly and consistently release new builds to production at the push of a button. In the meantime, testers will likely have found that the CI/CD model has a big impact on how they need to approach testing.

Most CI/CD pipelines have development, QA, staging, and production environments where certain tests are run to ensure that the code which has been written is safe to push ahead. Automated tests are the most important part of any CI/CD pipeline: without proper automated tests that run fast, have good coverage, and produce no erroneous results, there can be no successful CI/CD pipeline. The automated tests are usually divided into multiple “suites”, each with its own objective.

The list below gives a small overview:

  1. Unit tests: This is the suite that is run first, often by developers themselves before they add their changes to the repository. Unit tests normally test individual classes or functions (a minimal example follows this list).
  2. Integration tests: After unit tests come integration tests. These tests make sure that the modules integrated together work properly as an application. Ideally, these tests are run on environments that are similar to the production environment.
  3. System tests: These tests should test the entire system in an environment as close as possible to the real production environment.
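
As a minimal illustration of the first level, a unit test in JUnit 5 might look like this; the discount function is hypothetical production code, defined inline only to keep the sketch self-contained:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class PriceCalculatorTest {

        // Hypothetical production code under test; in a real project this lives in src/main.
        static double applyDiscount(double price, double rate) {
            return price * (1 - rate);
        }

        // A unit test exercises one function in isolation: no network, no database,
        // so the whole suite can run in seconds on every commit.
        @Test
        void appliesTenPercentDiscount() {
            assertEquals(90.0, applyDiscount(100.0, 0.10), 0.001);
        }
    }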

Testing in a development environment

In the development environment, smoke testing is done. Smoke testing, also known as “Build Verification Testing”, is a type of software testing that comprises a non-exhaustive set of tests aiming to ensure the most important functions run properly. The results of this testing are used to decide if a build is stable enough to proceed with further testing.

To implement smoke tests, the testing team develops a set of test cases that are run whenever a new release is provided by the development team. It is more productive and efficient if the smoke test suite is automated, or it can be a combination of manual and automated tests. To ensure quality awareness, the smoke test cases are communicated to the development team ahead of time, so that the developers are aware of the quality expectations. It is important to remember that a smoke test suite is a “shallow and wide” approach to testing.

Testing in a QA Environment

In a QA environment, regression testing is done. Regression testing is carried out to ensure that bug fixes or enhancements have not broken previously working functionality. The regression packs are a combination of scripted tests derived from the requirement specifications for previous versions of the software, as well as random or ad-hoc tests. A regression test pack should, at a minimum, cover the basic workflow of typical use case scenarios.

Best practices for Testers in CI/CD

  • Perform standard actions defined in the testing procedure & check the desired responses for correctness. Any failure of the system to comply with the set of desired responses becomes a clear indicator of system regression
  • Do a careful analysis of every defect based on the previous test scenarios to avoid a slip in regression testing
  • Ensure that the regression tests are correct and not outdated

Testing in a Stage Environment

In the staging environment (which is similar to the production environment), performance testing is done. Any application performance test result depends upon the test environment configuration.

Performance testing is often an afterthought, performed in haste late in the development cycle, or only in response to user complaints. It’s crucial to have a common definition of the types of performance tests that should be executed against your applications, such as single user tests, load tests, peak load tests, and stress tests. It is best practice to include performance testing in development unit tests and to perform modular and system performance tests.

Testing in a Production Environment

In the production environment, sanity testing is done. Sanity tests are usually unscripted and help to identify missing dependent functionality. They are used to determine whether a section of the application still works after a minor change. The goal of sanity testing is not to find defects but to check system health. An excellent approach is to create a daily sanity checklist for production testing that covers all the main functionality of the application. Sanity testing should be conducted on stable builds to ascertain that new functionality works and bugs have been fixed and that the application is ready for complete testing; it is performed by testers only.

Conclusion

This blog post points out which environments are part of the CI/CD pipeline and how the pipeline is configured to successfully deploy an application. It also explains the testing types and approaches best suited to each environment, along with their best practices.

Devendra Date
Contributed by Devendra Date, DevOpsTech Solutions Pvt Ltd.,


Implementing Continuous Integration (CI) provides software development teams the ability to adopt a regular software release schedule with an automated error detection process for a more agile, safe, and low-cost DevOps approach.

When applying this approach in data management, automated testing is important for many of the same reasons, as it enables teams to execute against the three testing drivers: agility, accessibility, and accuracy.

Data Agility Testing

By leveraging modern data management tools, the data ingestion process can be deployed at a more rapid pace (with metadata driven workflows or drag-and-drop code generation). Agility testing helps ensure proper front-end configuration is in place, which may appear daunting, but with appropriate environment access and testing jobs, the process can be quite simple. Data agility gives teams accurate data ingestion to produce data for use or storage immediately.

Data Accessibility Testing

To start, this process tests database connections and file URLs for accuracy. In advanced models, data dictionaries and glossaries are also checked for valid entries against ingested data. This driver forces governance practices to be in place before ingestion for fewer deployment and activation problems.
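
A minimal sketch of the first part of that check, assuming a JDBC-reachable staging database; the connection details are placeholders that would normally come from the pipeline’s environment:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.junit.jupiter.api.Test;

    class DataAccessibilityTest {

        // Placeholder URL; a real pipeline would inject this via environment variables or secrets.
        private static final String JDBC_URL = System.getenv().getOrDefault(
                "STAGING_JDBC_URL", "jdbc:postgresql://staging-db:5432/warehouse");

        @Test
        void stagingDatabaseIsReachable() throws Exception {
            try (Connection connection = DriverManager.getConnection(
                    JDBC_URL, System.getenv("DB_USER"), System.getenv("DB_PASSWORD"))) {
                // isValid() pings the database and waits up to the given number of seconds.
                assertTrue(connection.isValid(5), "Staging database did not respond in time");
            }
        }
    }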

Data Accuracy Testing

This testing takes place downstream of the ingestion process ensuring validation, transformation, and business rule logic is applied. It’s often considered the most difficult testing to visualize and implement at the right scope.

Tackling CI may seem complex on the surface, but by following these 3 testing drivers, teams can ingest, transform, and apply business rules faster and with fewer issues while reducing manual touch points. Once CI is configured and test-driven development is in place, you will probably wonder what you ever did without it.

Robert Griswold
Contributed by Robert Griswold, Vice President of Professional Services for TESCHGlobal


When it comes to CI/CD systems, properly designing the overall structure of your system can often test your applications more effectively than a poorly designed system with excellent tests.

When you start designing your CI/CD pipelines, the first thing to do is break apart your application into as many logically independent components as possible. For example, your application might have a frontend, backend, and middleware layer. Your first instinct might be to create a pipeline for each component, but you usually want more granular control.

For example, let’s say that you deploy your software in a Docker container. You’ll want to test your software independently (ex situ), and also test the software inside your Docker container (in situ). This allows you to catch errors specific to your code, and errors related to your deployment platform. In this instance, the ex situ and in situ tests require their own pipelines to comprehensively test both the application and the application’s deployment.

The next thing you need to consider is upstream dependencies. Upstream dependencies are triggers that cause a pipeline to execute. As a rule, each pipeline should have one primary upstream dependency, and zero or more secondary upstream dependencies. A primary upstream dependency is usually source code (but not always). Understanding the entire set of upstream dependencies for a given pipeline will make sure that your application always has the latest code, and will identify upstream changes that break functionality.

For example, let’s say that you have source code deployed as a Docker container. The git repository that contains your source code is a primary upstream dependency. When there are new commits to the repository, it triggers a pipeline that tests the individual code. This pipeline might also subscribe to another pipeline that contains dependencies, such as security scanning software.

After you understand how to structure your upstream dependencies, you need to consider downstream dependencies. Downstream dependencies are additional pipelines that are triggered when a pipeline successfully executes. When you start designing the structure of your CI/CD system, you need to take into account all downstream dependencies for every pipeline.

For example, if multiple pieces of software depend on a common module, anytime that common module’s pipeline executes, it should trigger those additional pieces of software. This guarantees that each source code component has the most recent version of all libraries, and it will identify any problems as early as possible.

Let’s go over an example to showcase how designing the high-level actions and organization of your CI/CD system will enable you to structurally find problems, even with suboptimal testing.

An engineer at our hypothetical organization issues a patch to fix a security vulnerability. After he pushes the code to git, this triggers a pipeline which successfully tests the patch. This then triggers 5 additional pipelines, because each of these pipelines depends on the new patch. 4 of these pipelines complete successfully, but one fails. The successful pipelines push their build products to QA. An engineer notices that the last pipeline is now red, and sees that the security patch worked, but it exposed an integer overflow bug in a Java library that needs to be patched. The engineer patches the library, and the CI/CD system automatically builds, tests, and deploys that code to QA.

This pipeline structure enabled our hypothetical organization to deploy the security patch to 4 applications in QA automatically, and show engineers exactly what is wrong with the last bit of code. Without understanding the proper upstream and downstream dependencies, applying the software security patches would have been extremely time-consuming, and it would not have identified the fact that one of the apps has a bug that directly conflicts with the patch.

This CI/CD system is able to effectively test large sets of highly dependent applications simply because of how it is structured.

David Widen
Contributed by David Widen, Pre-Sales Systems Engineer, BoxBoat Technologies,


The setup could be fast, but the maintenance may be hard—this is the truth of Continuous Testing.

Here is a straightforward story of continuous testing started from scratch. The tester first climbed some learning curves in scripting an automated test, maybe an API test, maybe a UI test. He gained hands-on experience with test frameworks and libraries (e.g. Selenium, Mocha, Calabash). Then he found some ways to run the tests in a clean environment and understood the concept of building an image and running it in a container, say using Docker. After some trial and error…Pass, Pass, Pass. Great!

The boss saw his work and said, “Let’s run all the tests once an hour, and send me an alert when any test fails.” He went to Google to search for some keywords: pipeline and scheduler. Jenkins, GitLab, Heroku—lots of systems provide a pipeline service. With any of them, he could run all the tests right after the deployment stage, and a schedule made it even handier to trigger the pipeline periodically. At the same time, he saved the test results to some kind of database, so he could compare them with the records of previous runs. Finally, when a test failure was detected, the program would send an email to the boss.

From then on, this guy no longer needed to repeat the same set of manual tests every day, every few hours, overnight, marking ticks and crosses on a long, long list, getting bored and easily making mistakes. But one day, sad news came: the website broke with NO alert sent. Okay…let’s check what’s going wrong. The test script? The built image? The pipeline? The test runner? The scheduler? The machine? The database? The alert sender? That’s why I said the maintenance may be hard. Yet, if your project is going to be a long one, it’s really worth it.

Contributed by Joyz Ng, QA Engineer, Oursky


Continuous Delivery is a key practice to help teams keep pace with business demands. True Continuous Delivery is hard to do—it is impossible without confidence in the quality and fitness of the software—the kind of confidence only deterministic and repeatable automated tests can supply.

It is common for teams that have historically been dependent on manual testing to start with an inverted test pyramid in their automated build pipeline. This is better than no tests at all, but the fragility of these types of tests tends to surface quickly. False positives, false negatives, and little improvement in escaped production defect counts tend to erode trust in the test suite and, ultimately, lead to the removal of the tests.

I recommend that my teams start with a solid core of fast running, highly isolated unit tests. Every language has a popular unit testing framework which gives teams a high level of confidence in the quality and correctness of their code in mere seconds. This should be the first quality gate in a Continuous Delivery pipeline. If these tests fail there is no point in moving on to the next stage. Unit tests show that the code does things correctly.

Building upon the unit test suite should be a thin layer of integration tests. This proves that components play nicely together and increases confidence that the application behaves the way the customer expects. These tests should be executed one level below the UI, increasing their stability and minimizing their execution time. I encourage teams to use Behavior Driven Development style tests, which have the goal of proving that the code does the correct thing.

Given its popularity, it is typical for teams to jump straight into a Gherkin-based BDD framework, but the unit testing framework with which the team is already comfortable can be just as effective. Debates over tooling are endless. Ultimately, the ‘right tool’ is the one that gives the team the highest level of confidence that the code is fit for production with the least amount of friction.
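To show that an ordinary unit testing framework can express behavior-driven tests, here is a small given/when/then sketch in JUnit 5; CartService, CheckoutResult, and the scenario are invented purely for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

// Minimal stand-ins for the production code, purely for illustration.
class CartService {
    CheckoutResult checkout(String customerId, int itemCount) {
        return new CheckoutResult(itemCount == 0);
    }
}

class CheckoutResult {
    private final boolean rejected;
    CheckoutResult(boolean rejected) { this.rejected = rejected; }
    boolean isRejected() { return rejected; }
}

class CheckoutBehaviorTest {

    @Test
    @DisplayName("a customer with an empty cart cannot check out")
    void emptyCartCannotCheckOut() {
        // Given a customer with an empty cart
        CartService cart = new CartService();

        // When they attempt to check out with zero items
        CheckoutResult result = cart.checkout("customer-42", 0);

        // Then the checkout is rejected
        assertTrue(result.isRejected());
    }
}
```

The same behavioral intent is captured in the test name and the given/when/then comments, without introducing a separate Gherkin toolchain.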

Nick Korbel
Contributed by Nick Korbel, Booked Scheduler,


Many organizations have already implemented continuous integration/continuous deployment, but not everyone has, and thus many are still planning to do so. According to The State of the Software Testing Profession Report 2016-2017, 66% of respondents stated they were planning on implementing continuous integration and 64% reported that they were implementing continuous testing. So, making quality tests from the very start is a vitally important part of the process.

If you are already past the planning stage, fear not, because you can still make test quality a key cornerstone of your CI/CD process. One innovative approach we see at Global App Testing is having QA become the gatekeepers of release. This allows developers to focus on what they do best – developing. It shifts the focus of CI/CD from releasing whatever code is ready to releasing quality code. The distinction is very important and plays a big role in how your customers react to your products or services.

Oftentimes we run into companies that use test quality for CI/CD in a way which compromises the quality of their product and service. This happens because of the assumption that developer unit tests and automation mean quality. This is, unfortunately, untrue. What is true is that automating software testing before it hits the CI/CD server can only tell you if the code has passed or failed. We are surprised by the number of companies we speak with who allow this pass/fail to be the gatekeeper for release! These tests are only designed to test known information (i.e. scripted tests). Pass-or-fail tests do not represent “quality” applications, products, or services.

Organizations need to be thinking of ways to automate their functional testing areas and scale them to meet the demands of a CI/CD environment. To do so, organizations should be looking at services or platforms that enable them to receive quality results faster than traditional in-house methods while also increasing quality.

Nick Roberts
Contributed by Nick Roberts, Head of Research and Marketing, Global App Testing,


Escalated market trends and consumer needs have altered software development, and the traditional, conventional method of developing software cannot meet the new demands of the software industry. So, with the increased demand for quick software releases, there is a need for the technical capability to reach the market with a competitive product of verified high quality. This need has compelled the evolution of “Continuous Integration (CI) and Continuous Delivery (CD)”.

Undoubtedly, the emergence of the Agile methodology has accelerated the software development process to reach the market sooner and faster by splitting the whole software functionality into modules, and developing the modules as per client priority, delivering the software in short sprints much sooner than in the waterfall methodology.

“Faster time to market” is the ultimate business goal of every organization, and Agile has acted as an initiator to drive the journey of reaching this goal. Still, there is a long way for Agile adopters to travel to reach their destination. Continuous delivery has become the new vehicle for them to reach their destination sooner by streamlining and automating the build, deployment, and delivery process.

Continuous Integration is a process of automating build deployment by leveraging a CI tool like Jenkins, Bamboo, Octopus Deploy, Travis CI, and others. When developers commit code with bug fixes or new features implemented, CI opens the door to CD, enabling more frequent releases of improved software. However, a gap is created between the two endpoints of CI and CD by sluggish software testing practices, and it needs to be bridged to deliver quality software faster.

These sluggish testing practices, which are manual and tedious or whose test automation is not aligned with CI/CD processes, have to be refined to become continuous. Therefore, there is a need for a continuous testing capability, achieved by blending the triggering of automated regression test suites from the CI server with automated build deployment. This way, whenever a new build arrives, whether with bug fixes or new features, the regression test suites are executed, leading to continuous testing, and, based on the bugs found, decisions can be made on whether to release or not. The ultimate business goal, achieving faster time to market, can then be reached.

The release time for software can be further reduced by automating build deployment into production, which means having an automated process of decision-making for releasing or not releasing software, based on the severity of the bugs reported after continuous testing.

Venkatesh Akula
Contributed by Venkatesh Akula, CEO, ClicQA,


Agile development eliminates the main problem of traditional development: you needed to have a working system before you could test it, so performance testing happened at the last moment. While it was always recommended to start performance testing earlier, there were usually limited things you could test before the system was ready. Now, with agile development, we have a major “shift left” in testing, allowing teams to start testing early.

Agile development should be a rather trivial case for performance testing: by definition, you have a working system available to test in each iteration. In practice, it is not so simple. One issue is that you need to run tests again and again to make sure that there is no regression, so you need automation. In traditional development, you just needed to verify the performance of the system before release, so there was no performance testing automation to speak of (beyond using load testing tools). And compared with functional testing, we have more moving parts, such as complicated setups, long lists of possible issues, complex results (no pass/fail), changing or fragile interfaces, and time and resource constraints.

While the trend in development is definitely towards Continuous Integration, performance testing remains a gray area. Some continue to do it in the traditional pre-release fashion, while some claim 100% automation and full integration into their continuous process. We have a full spectrum of opinions about what, when, and how things should be done in relation to performance. The issue is that context is usually not clearly specified, even though context is the main factor. Depending on context, the approach may (and probably should) be completely different. Full success in a simple (from the performance testing point of view) environment doesn’t mean that you can easily replicate it in a more sophisticated environment.

I am very excited that my talk “Continuous Performance Testing: Myths and Realities” about these issues has been accepted by the CMG imPACt performance and capacity conference (https://cmgimpact.com/) – I hope it triggers a meaningful discussion.

Alexander Podelko
Contributed by Alexander Podelko, alexanderpodelko.com,


I guess by now, everyone is convinced that test code should be written because it not only enables us to verify if the requirements from the analysis are met, but also that a change to one part of the code doesn’t break something else unexpectedly.

But just writing test code is not enough. We need to write good test code. The easiest measurement of this quality is verifying whether every production line of code is executed during the tests. Various frameworks like Clover, Cobertura, Emma, and JaCoCo have a measurement named line coverage. This gives us an indication of how much of the code is tested. And within our CI/CD tool, we have the option to break the ‘build’ when the line coverage value drops too low.

However, this line coverage value doesn’t say anything about the test code quality. I saw on various projects that developers ‘just wrote some test code’ so that these criteria were met. But these tests had no relation to the requirements of the analysis, nor did they link with the way the production code would be used. What is the point of testing the getter and setter methods, for example, if they are generated automatically in most cases anyway?

If you use a coverage measurement value, the only meaningful one is so-called branch coverage. If there are multiple paths the code can take, as with an if statement, branch coverage verifies that both branches are executed and that all outcomes of the condition occur.
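A tiny Java example makes the distinction concrete; the MathUtil class is hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical utility class, used only to illustrate line vs. branch coverage.
class MathUtil {
    static int max(int a, int b) {
        // A single line with two branches: "a > b" can be true or false.
        return a > b ? a : b;
    }
}

class MathUtilTest {

    @Test
    void lineCoverageAloneCanMislead() {
        // This one call executes every line of max(), giving 100% line coverage,
        // yet only the "a > b" branch is exercised, so branch coverage is 50%.
        assertEquals(5, MathUtil.max(5, 3));
    }

    @Test
    void branchCoverageNeedsBothOutcomes() {
        // Adding the opposite case is what a branch-coverage gate would demand.
        assertEquals(7, MathUtil.max(2, 7));
    }
}
```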

Another important factor is the use of unit and integration tests. Unit tests are of course easier to create because they have no dependencies on external systems; those dependencies are generally mocked using frameworks like Mockito, JMockit, or EasyMock. Integration tests are much more difficult because we need to make sure that each test stays independent of the others, and we need more setup and maintenance for the external systems.

But you can go far with your unit tests. Whenever you have an interface, you can start mocking and creating dummy implementations. That way, you can implement dummy logic which helps in testing your code. Creating a dummy implementation of the JPA EntityManager or CDI BeanManager can be achieved in a small number of statements. You can thus reduce the complexity of the integration tests and achieve some level of integration verification.
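As a sketch of that idea, the following JUnit 5 test replaces a hypothetical ExchangeRateProvider interface with a Mockito mock; all class names here are invented for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical interface and service, used only to illustrate mocking.
interface ExchangeRateProvider {
    double rateFor(String currency);
}

class PriceService {
    private final ExchangeRateProvider rates;
    PriceService(ExchangeRateProvider rates) { this.rates = rates; }
    double inCurrency(double amountInEur, String currency) {
        return amountInEur * rates.rateFor(currency);
    }
}

class PriceServiceTest {

    @Test
    void convertsUsingTheProvidedRate() {
        // The external dependency is replaced by a Mockito mock,
        // so no real rate service is needed for this test.
        ExchangeRateProvider rates = mock(ExchangeRateProvider.class);
        when(rates.rateFor("USD")).thenReturn(1.10);

        PriceService service = new PriceService(rates);

        assertEquals(110.0, service.inCurrency(100.0, "USD"), 0.001);
    }
}
```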

Another option you have is to automatically generate an in-memory database based on the JPA entity classes when you don’t have any database-specific artifacts like stored procedures. This database can be cleared completely before each test and filled with specific test records which are stored in an Excel file and loaded by DBUnit, for example. This way, maintaining the “external” database when its structure changes becomes easier, and you can still run your integration tests.
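A rough sketch of how such an in-memory database could be bootstrapped, assuming Hibernate as the JPA provider, the javax.persistence namespace, H2 as the in-memory database, and a persistence unit named test-pu (all of these are assumptions for illustration):

```java
import java.util.Map;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Sketch: boot the JPA provider against an in-memory H2 database so the
// schema is generated from the entity classes before the tests run.
public class InMemoryDatabaseSetup {

    public static EntityManagerFactory createTestFactory() {
        Map<String, String> props = Map.of(
            "javax.persistence.jdbc.driver", "org.h2.Driver",
            "javax.persistence.jdbc.url", "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1",
            // Hibernate-specific property: rebuild the schema from the entities.
            "hibernate.hbm2ddl.auto", "create-drop"
        );
        return Persistence.createEntityManagerFactory("test-pu", props);
    }

    public static void main(String[] args) {
        EntityManagerFactory emf = createTestFactory();
        EntityManager em = emf.createEntityManager();
        // ... load a DBUnit dataset here and run the integration test ...
        em.close();
        emf.close();
    }
}
```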

Rudy De Busscher
Contributed by Rudy De Busscher, Senior Developer,


Continuous Integration is an important part of the software development process—it’s the last barrier between development and production. The crucial decision, “is this app ready to deploy?”, should be made automatically as part of Continuous Deployment. However, dealing with such decision-making logic is a complicated problem. Independently of application size and purpose, we’d like to see a similar outcome: 5-20 minutes to run tests and a build which is ready to be deployed to production. We also want to keep and aggregate different metrics like code linting or code coverage to ensure the stability of the code base.

In any case, staying below 20 minutes per build is a tricky requirement. For most CI/CD environments, we use Docker, as it natively isolates different jobs and allows us to run tests in parallel. If one server is not enough, jobs can be executed on multiple servers using Docker Swarm.

Using multiple CI servers for different purposes is also a good choice. Tests should be split into groups so each of those groups can be executed in its own process. For unit and integration tests the fastest feedback is required, so they should not take longer than 5 minutes; if they do not pass, other tests may not be started. Such a setup is the most efficient for development as well as for delivery, however, it can be resource-intensive depending on the size of a project. If this is an issue, it is possible to allow trade-offs, like running the most common acceptance tests only in a nightly group; however, such a setup should be avoided, as it closes the road towards Continuous Deployment.

Mykhailo Poliarush
Contributed by Mykhailo Poliarush, Founder, sdclabs,


Continuous deployment matters because many software development companies release code quickly, and it’s important to release quickly while guaranteeing the quality of the software. For this reason, it is important to test at each level of development. You can test effectively and in depth using CI tools. I use Jenkins, which is an amazing tool that helps adhere to the CI/CD process. Jenkins provides an option to create pipelines, and in each pipeline you can integrate the quality process, executing automated tests at each level (unit tests, integration tests, and functional tests).

The pipelines are written in Groovy and you can add stages, for example:

  • Stage 1 = Deployment to an environment (QA or dev)
  • Stage 2 = Unit tests
  • Stage 3 = Integration tests
  • Stage 4 = Functional tests
  • Stage 5 = Deployment to production

It’s important to mention that this whole process can be automatic (that’s the main idea). For the deployment-to-QA stage, for example, you can configure a webhook in GitLab, GitHub, or whatever platform you are using. The webhook listens for commits from the development team. When it detects a commit, a job is triggered that deploys to the environment. After the deployment ends, the tests are triggered: first the unit tests, and if everything works fine, the pipeline passes to the next stage (integration tests).

You can execute some stages in parallel if you want, which helps accelerate the deployment and reduce the time your tests take. If a test fails, the pipeline ends; for example, if a unit test fails, the pipeline stops there. You can add a lot of stages, but it is important to design the pipeline the correct way so that the automated tests help you guarantee the quality of your project.

As a test framework, I use Java with Selenium WebDriver and TestNG to write my test cases. TestNG helps me create test cases at each level, and I use Selenium WebDriver to test web applications with functional tests. TestNG provides a way to divide your tests into groups, so you can have a single project with different sets of groups (unit tests, integration tests, functional tests, etc.). This functionality makes it easy to integrate the tests into pipelines, as in the sketch below.
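As an illustration of the grouping idea, here is a minimal TestNG sketch; the test names, group names, and the Maven commands in the comment are assumptions, not a prescription:

```java
import org.testng.Assert;
import org.testng.annotations.Test;

// One project, different groups that a pipeline stage can pick, e.g. running
// "mvn test -Dgroups=unit" in the unit-test stage and "-Dgroups=functional"
// in the functional-test stage (commands are illustrative).
public class GroupedTests {

    @Test(groups = "unit")
    public void discountIsApplied() {
        Assert.assertEquals(100 - 10, 90);
    }

    @Test(groups = "integration")
    public void orderIsPersisted() {
        // ... talks to a real (or in-memory) database ...
    }

    @Test(groups = "functional")
    public void userCanLogIn() {
        // ... drives the UI through Selenium WebDriver ...
    }
}
```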

I use Selenium WebDriver when the tests I have to perform are on a website. Selenium is a tool that lets you interact with the user interface of a website, simulating all the actions that a user would perform and testing the functionality of the system (functional tests).
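A minimal Selenium WebDriver sketch of such a user flow; the URL, element locators, and credentials are placeholders:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Illustrative functional flow: open a page, act as a user would, check the result.
public class LoginFlowExample {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("demo-user");
            driver.findElement(By.id("password")).sendKeys("demo-pass");
            driver.findElement(By.id("login-button")).click();

            // A real test would assert on the outcome, e.g. with TestNG:
            // Assert.assertTrue(driver.getTitle().contains("Dashboard"));
            System.out.println("Page title after login attempt: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```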

Selenium works based on HTML, but what about the rendering of the sites? How can we guarantee that a user is seeing the site correctly? If you want to test the rendering of a website, you can use an amazing tool named Galen, a framework that lets you specify how a site has to look on mobile devices or desktops. For example, you can define that a button has to be centered in the middle of the page when viewing the website on a mobile device, and that the same button has to be displayed on the right when viewing it on a desktop. Another tool that I use is SikuliX, for automating tests where the app is not a web application.

José A. Wolff
Contributed by José A. Wolff, autoweb


There is a calmness in the air, a tranquil feeling, a hum of contentment. No, I’m not hiking in the mountains, I’m in the office, therefore it can only mean one thing—the build radiator is green.

I’d like to hope this is a feeling the majority of readers have experienced. CI is considered an integral part of most teams’ approach to development these days, and CD is gaining more and more popularity. With the adoption of CI has come this desire to see green, and in some cases, it’s mandatory to see green. Should that build radiator ever dare to be red, you don’t want to be around when that happens!

So we want green, but what does green actually mean? With CI, green usually means that the build successfully compiled, some unit tests were executed, the app was deployed somewhere, and some further automated checks passed on the various seams the application may have. Or, to put it another way, nothing changed. We learned that nothing changed. All the knowledge that we’ve codified in those automated checks is still valid. Our algorithms and codified oracles are spot on, and the system is behaving in a way that satisfies those checks. Win!

However, the majority of those automated checks were probably written many months ago. The system many months ago isn’t the same system now. Do you even know what all your automated checks do? That doesn’t matter, Richard, they are green! But it does matter—it matters because I can guarantee you that your testing strategy is being directly impacted by the results of those CI builds. When a build goes red and the cause is an automated check, in the process of fixing that check you reaffirm your algorithm and oracle against the application. The fix could be to the application, or it could be to the check itself; either way, you’ve refreshed your knowledge of that check and the system’s behavior. When was the last time you did that for a green check? You likely also get a feeling inside that tells you to do some exploratory testing in that area, which you probably will.

This is what I mean by the illusion of green. If you don’t know what all those checks are doing, what your codified oracles look like, why does green make us believe all is well? The reason is that it fits our current knowledge model of the application. We may not have looked at those areas of the application recently, therefore our model is still congruent with the automated checks. So, when they run green, internally we reaffirm our beliefs of the system and those beliefs are integral to our approach to testing the application.

So what can be done? Well, if you have some automated checks in place, you’re almost certainly adding new ones when new features are added or improved, while also removing and refactoring old ones. What I find the majority of the time, though, is that we only do this in the areas being developed at that time. So I like to encourage people to read a few other checks while they are in the codebase, just to repopulate that knowledge.

You can also challenge your knowledge. Make some notes about how you believe a part of the system works, then explore the automated checks you have in that area to see how they compare.

To finish, let’s briefly talk about red. Red is a mandatory trigger to do all the above. I like to consider a red build as an invitation to explore. Explore the application, the automated checks, the process, all the things pretty much to find out what happened so we can improve in the future. Constantly red is, of course, bad, but a red build every now and again is a great reminder to refresh and align our automated checks with our knowledge of the application.

Richard Bradshaw
Contributed by Richard Bradshaw, Ministry of Testing,


In the web development world, a combination of testing + CI/CD is of special importance. This is because of the divergences in the browser implementations and target platforms (desktop, mobile). These divergences lead to increased complexity of the maintenance and quality assurance processes – every feature needs to be verified and known to work on every supported browser/platform.

Managing this complexity without having clear information about the current state of your codebase means just blindly accepting risks of delivering bad/broken user experience. The only answer that exists to this problem is test-driven development and test-driven quality assurance. The results of running the test suite reflect the current state of the codebase and can be used for making informed decisions in the project lifecycle. With the test suite, you basically know whether the current source can be safely pushed to production right now.

And the second piece of the puzzle is CI/CD systems, which automate the whole infrastructure of your project. Those tools require a certain effort to set up—there’s no universal solution, as the deployment of every project is unique. But the reward is tremendous — deployment becomes a trivial “after lunch” routine, and new features and bug fixes are delivered to the user much faster.

At Bryntum, we accept no compromises on the quality of our products, and every product is tested nightly in every supported browser. This focus on quality led us to create the Siesta testing tool, focused on modern JavaScript-heavy web apps and seamless cross-browser automation. We definitely recommend ranking the latter feature highly in your evaluations of testing tools, as otherwise you might be limited to certain browsers only.

Recently we’ve also launched the RootCause service for tracking user sessions and reproducing JavaScript errors on the end-user side. This service provides valuable insight into how your system actually behaves “in the wild” on end users’ devices. RootCause allows you to implement a significantly faster bug-fixing cycle: information about the errors that happened on a user’s machine is available in just a few seconds.

Contributed by Nickolay Platnov, Bryntum AB


Teams sometimes avoid setting up a CI system as they perceive it as complicated. But it really doesn’t have to be — even if you only have the CI system run your tests and alert someone when tests fail, that’s good enough to get started. Even the most diligent teams easily forget to run tests if they have to be run manually. As a bonus, after you have the system in place, you can easily add more functionality to it when needed.

One of the biggest mistakes that people make is ignoring the output from the CI system. Developers ignore messages about failed tests, or perhaps they even comment the failing test out in order to make the build pass. This is a big issue, as you’re essentially poking holes into the system: What’s the point of CI if all you do is ignore it? Issues such as tests failing in CI should be fixed as soon as possible without just winging it.

A good way to keep people from ignoring the CI system is to make sure it’s not too noisy. Alerts should only go to the people they concern, and the system shouldn’t generate more alerts than necessary. If people get too many alerts they don’t care about, they’ll easily start ignoring them entirely.

Jani Hartikainen
Contributed by Jani Hartikainen, CodeUtopia,


Ten years ago, performance testing was on the last-minute task list before software went live into production. But in those days, end users were less demanding when it came to user experience. Performance testing was planned late in the project life cycle in order to test the application in a stable and representative environment. With agile, continuous delivery, or DevOps, this approach is not acceptable anymore. Application performance, as part of the global user experience, is now a key aspect of application quality. “Old school” sequential projects with static qualification/implementation/test phases that put off performance testing until the end of the project face a performance risk. This is no longer acceptable by today’s application quality standards.

Agile and DevOps involve updating the project organization and require close collaboration between the teams. In these methodologies, the project life cycle is organized into several sprints, with each sprint delivering a part of the application. In this environment, the performance testing process should follow the workflow below:

Establishing a performance testing strategy

As the first and most important step of performance testing, a strategy should be implemented at an early stage of the project life cycle, defining the performance testing scope, the load policy, and the service level agreements.

Performance testing, being complex and time-consuming with many aspects requiring human action (test design, test script maintenance, interpretation of test results), needs automation at every step of the test cycle in order to test faster and in a continuous manner. It is never possible to test everything, so conscious decisions about where to focus the depth and intensity of testing must be made to save time without extending delivery deadlines.

Risk-based testing

Risk assessment provides a mechanism with which to prioritize the test effort. It helps to determine where to direct the most intense and deep test efforts and where to deliberately test lightly, in order to conserve resources for intense testing areas. Risk-based testing can identify significant problems more quickly and earlier on in the process by testing only the riskiest aspects of a system. With a methodology like DevOps, the number of releases is increasing but the size of the releases becomes smaller and smaller. That means that the risk is easier to measure when the release is smaller. You should only focus on the meaningful part of the application.

Component testing

In a modern project life cycle, the only way to include performance validation at an early stage is to test individual components after each build and implement end-to-end performance testing once the application is assembled. Since the goal is to test performance at an early stage, listing all important components will help to define a performance testing automation strategy. Once a component has been coded, it makes sense to test it separately to detect regressions and measure the response time and the maximum calls per second that the component is able to handle.
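As a rough sketch of such a component-level check, the following standalone Java program (using the JDK’s built-in HTTP client, Java 11+) calls a hypothetical endpoint and fails when a response-time budget is exceeded; the URL and the 300 ms budget are illustrative assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Sketch of a component-level performance check: call one endpoint and
// fail the CI stage if the response-time budget is exceeded.
public class ComponentResponseTimeCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/orders/health")) // placeholder URL
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Status " + response.statusCode() + " in " + elapsedMs + " ms");
        if (response.statusCode() != 200 || elapsedMs > 300) {
            // Non-zero exit code lets a CI stage treat this as a regression.
            System.exit(1);
        }
    }
}
```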

Most applications have lots of dependencies, so testing a single component can be a challenge because you will have to wait for all of those dependencies. To let you validate the code, implementing service virtualization helps you test each component without being affected by the other projects that are currently deploying or enhancing their systems.

Validate the user experience

Once the application is assembled, the testing objectives change. At some point, the quality of the user experience needs to be validated. Measuring the user experience is possible by combining two solutions: load testing software (NeoLoad) and a browser-based or mobile testing tool. It is important to perform end-to-end testing, but it’s equally important to ensure that the scope of end-to-end testing is not increased unnecessarily. Indeed, we need to remember that executing more tests, especially during the end-to-end testing phase, could affect productivity. The best way is to focus on the right and important things by performing a selection of end-to-end tests (cf. the performance testing strategy).

Reduce the maintenance time of your scenarios

Even in continuous delivery or DevOps, testing the performance of a functionally unstable system does not make sense, because you will only generate exceptions in the application. In that context, you would only prove that the application behaves strangely in an unstable situation. Functional testing needs to be done before any load testing (even for component/API testing). Reusing or converting functional scenarios is relevant to reduce the creation and maintenance of your performance testing assets.

Reporting a green light for deployment

Component testing and end-to-end testing will be automated by continuous integration servers or specific release automation products. Any testing activity needs to report a status in those products (depending on several parameters such as response time, user experience, hits per second, errors, and the behavior of the infrastructure) to enable or disable the next step of a pipeline.

Reporting a status in functional testing is obvious because the aim of each test scenario is to validate a requirement.

DevOps will limit end-to-end testing

With DevOps, it’s important to continuously validate the performance of the application without blocking the pace of delivery. That is why end-to-end testing will be validated less frequently (depending on the risk, of course) and the focus will be on performance regression at the code level.

Henrik Rexed
Contributed by Henrik Rexed, Performance Engineer, Neotys


Quality Assurance has not always been an integral part of software development teams, but now it is essential. Organizations tend to have separate QA teams to assess whether the business requirements are met in full. The potential that lies in this situation is usually explored by DevOps practitioners by integrating core QA functionalities with Dev teams to nurture a holistic growth environment with a focus on quality. But the question still remains as to how this can benefit you.

Scope: Releasing a high-quality product is one of the fundamental aims of DevOps—a quality-driven environment is necessary to achieve business goals. Software quality in today’s fast-paced development environments often refers to exhaustive test coverage of your code in the form of unit tests, sanity tests, functional tests, system tests, and integration tests. Test quality is the most critical component, as it offers a clear understanding of the total percentage of the product that is tested. One of the fundamental aspects of adopting CI/CD practices is the implementation of automated tests that run once the code has been committed. A continuous testing cycle offers a regression test suite to be performed after the basic unit tests have been completed. This usually saves developers time waiting for feedback on software usability. The process involves assessing the results of tests performed on the product from its code level to its usability level, which is the foundation for test quality.

Evaluation: Test quality measurement is not derived from coverage alone—assessing how thoroughly a test suite exercises a given program is not enough to determine test quality. Measuring the completeness of a software product is a complex process which incorporates evaluating everything, including unit testing, smoke testing, code, requirements, structural, architectural, functional (white, black and broken box) coverage, analysis of temporal behavior, regression, integration, and usability testing.

Tools and methods:

Cobertura – Widely adopted statistical coverage measurement tool, mainly for Java-based software.

Coverage – Python-oriented coverage analysis tool.

Selenium – Open-source functional testing tool.

UFT – Functional and regression testing tool offered by HPE.

Comet – Coverage measurement tool often used in heavy industrial testing.

Contributed by MSys Technologies,