What you’ll need: 1 Developer + 1 DevOps Engineer
How long it will take: 2 weeks
Payoff: Provisioning time cut to roughly 30% of what it was (from ~10 minutes down to ~3 minutes, a ~3.3x speedup)
Multitenancy was a bit more difficult to solve, as it required DevOps assistance. Here we needed to solve environment provisioning: the average amount of time a developer or tester had to wait until their environment was spun up. Our starting average T1 was ~10 minutes.
By implementing multitenancy, we expected to achieve fast environment provisioning, reducing the wait from 10 minutes to 3. In reality, our first implementation got us to 5 minutes, and the final implementation to 3.5 minutes. That final figure came after optimizations that included initiating namespaces via PR forecasting and retaining reserve compute capacity to support peak outlier concurrent testing.
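To make the namespace-per-PR idea concrete, here is a minimal sketch (not our actual tooling) of pre-creating a Kubernetes namespace the moment a PR opens, so the environment is already spinning up before anyone asks for it. It assumes kubectl is installed and authenticated against the CI cluster; the function name and PR number are illustrative.

```python
# Hypothetical sketch: pre-provision a Kubernetes namespace when a PR opens,
# so the environment is warming up before the first test run needs it.
# Assumes kubectl is installed and authenticated against the CI cluster.
import subprocess

def provision_pr_namespace(pr_number: int) -> str:
    """Create (or reuse) a namespace dedicated to one pull request."""
    namespace = f"pr-{pr_number}"
    # --dry-run=client -o yaml piped into kubectl apply makes the call
    # idempotent: re-running for the same PR does not fail if it exists.
    manifest = subprocess.run(
        ["kubectl", "create", "namespace", namespace,
         "--dry-run=client", "-o", "yaml"],
        check=True, capture_output=True, text=True,
    ).stdout
    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=manifest, check=True, text=True)
    return namespace

if __name__ == "__main__":
    # In practice this would be triggered by a "PR opened" webhook.
    print(provision_pr_namespace(1234))
```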
An improvement from 10 to 3 minutes might not sound like a lot, but when you take into consideration how many developers you have and how many times they commit code a day, it adds up. For us, on a calm day, we're talking about 15 developers committing code twice daily, meaning 300 minutes of idle time a day. Now imagine a super stressed day.
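Written out, the back-of-the-envelope math (same figures as above, using the 3.5-minute final result):

```python
# Daily idle time spent waiting for environments, using the figures above.
developers = 15          # developers committing on a calm day
commits_per_day = 2      # commits per developer per day
before, after = 10, 3.5  # provisioning minutes: before vs. after multitenancy

idle_before = developers * commits_per_day * before  # 300 minutes/day
idle_after = developers * commits_per_day * after    # 105 minutes/day
print(f"Saved per day: {idle_before - idle_after:.0f} minutes")  # 195
```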
Minor Issue: Private Build Images
Multitenancy at such an early stage, which is what shift-left testing de facto requires, means maintaining private images at the PR level. Now we need to manage not only master images but branch-, tag-, and fork-level images as well. This is much more difficult to manage and maintain (see the tagging sketch after this list):
- Your build servers need to be able to support automated image building at all levels.
- Storage cost: with images maintained at master, branch, tag, and fork level, your image storage cost has now multiplied by a factor of 4.
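As a rough illustration of keeping all those image levels traceable, here is a hedged sketch of deriving a unique registry tag per build. The environment variable names are invented; map them to whatever your CI server actually exposes.

```python
# Hypothetical sketch: derive a unique, traceable image tag for every build
# level (master, branch, tag, fork). The CI_* variable names are made up;
# substitute the variables your CI server actually provides.
import os
import re

def image_tag() -> str:
    ref = os.environ.get("CI_GIT_REF", "master")    # branch or tag name
    fork = os.environ.get("CI_FORK_OWNER", "")      # set only for fork builds
    sha = os.environ.get("CI_COMMIT_SHA", "dev")[:8]
    # Registry tags only allow [a-zA-Z0-9_.-], so sanitize the ref.
    safe_ref = re.sub(r"[^a-zA-Z0-9_.-]", "-", ref)
    parts = [p for p in (fork, safe_ref, sha) if p]
    return "-".join(parts)

# e.g. "myservice:alice-feature-login-3f2a91bc" for a fork build
print(f"myservice:{image_tag()}")
```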
Neither of these is a dealbreaker for a container registry. What is a dealbreaker is thresholds: now that you are working at a larger scale, it is only a matter of time until you hit your limit and break your pipeline. All you need to do to combat this in advance is set up alerts and delete obsolete images.
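A minimal sketch of such a cleanup job, against the Docker Registry HTTP API v2: the registry URL, repository name, and the "obsolete" rule (any pr-* tag whose PR is no longer open) are assumptions to adapt, and deletion must be enabled on your registry.

```python
# Hypothetical cleanup sketch using the Docker Registry HTTP API v2.
# REGISTRY, REPO, and the "obsolete" rule are assumptions; adapt them.
import requests

REGISTRY = "https://registry.example.com"   # placeholder registry URL
REPO = "myservice"
MANIFEST_TYPE = "application/vnd.docker.distribution.manifest.v2+json"

def delete_tag(tag: str) -> None:
    # The API deletes by digest, not by tag, so resolve the digest first.
    head = requests.head(f"{REGISTRY}/v2/{REPO}/manifests/{tag}",
                         headers={"Accept": MANIFEST_TYPE})
    head.raise_for_status()
    digest = head.headers["Docker-Content-Digest"]
    requests.delete(f"{REGISTRY}/v2/{REPO}/manifests/{digest}").raise_for_status()

def cleanup(open_prs: set[int]) -> None:
    tags = requests.get(f"{REGISTRY}/v2/{REPO}/tags/list").json().get("tags") or []
    for tag in tags:
        suffix = tag.removeprefix("pr-")
        if tag.startswith("pr-") and suffix.isdigit() and int(suffix) not in open_prs:
            delete_tag(tag)

cleanup(open_prs={1234, 1240})  # keep images only for PRs still open
```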
Friends with Benefits
We set out to reduce our release cycle, but there are a few more things we achieved along the way that are worth noting:
- Safe environment replay in production: Before we set out, our environment provisioning was based on VMs. There is nothing wrong with VMs, but they are not built for the workload we deal with at SeaLights. As a result of containerization, our CI is now built on orchestration, meaning we enjoy all of the out-of-the-box advantages that container orchestration provides. Because we already implemented a solution for common application issues in the testing stage, rolling that same solution out to production is straightforward and relatively safe thanks to compatible environment configurations. Read more about investing in containers.
- Consistent environments: One of the more annoying issues that any developer or tester can attest to is inconsistency between environments. There is nothing more frustrating than having a build consistently fail, only to discover that the test environment is missing some of the production environment's configuration. With containers, maintaining all the different environment configurations and enforcing consistency is easy work (a drift-check sketch follows below).
Not only is it easy work, it also killed some of our false negatives and bugs. Once we implemented this, we realized that 17% of our "fails" had been false negatives caused by environment inconsistencies, and 9% of our production bugs were the result of environment inconsistencies. Today we no longer have to deal with these issues. Woot!
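For flavor, here is a minimal, hypothetical sketch of the kind of drift check containers make easy: diff the environment two running containers actually see. The container names are placeholders; since both should come from the same image, any difference points at injected configuration.

```python
# Hypothetical sketch: diff the environment two running containers see, to
# catch test-vs-production configuration drift early. Container names are
# placeholders; both containers are assumed to run the same image.
import subprocess

def container_env(name: str) -> dict[str, str]:
    out = subprocess.run(["docker", "exec", name, "env"],
                         check=True, capture_output=True, text=True).stdout
    return dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

test, prod = container_env("myservice-test"), container_env("myservice-prod")
# Symmetric difference: configuration keys present in only one environment.
for key in sorted(test.keys() ^ prod.keys()):
    print(f"only in {'test' if key in test else 'prod'}: {key}")
```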