Containers are everywhere. The speed at which container technology has gained traction and has been accepted by the community is unprecedented. That said, up until now containers have largely been a grassroots movement led by developers, and their adoption by other teams and management is only now kicking into gear.
Beyond understanding container technology itself, there are four elements of containers that development and QA managers must weigh when embarking on containerization: containers and DevOps culture, runtime features, orchestration features, and cost. Your answers to these criteria will dictate go/no-go for containers.
TL;DR: containers are a solid go.
What is Container Technology?
Containers ensure that application code will run regardless of where it’s deployed. They eliminate inconsistencies by wrapping up software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, and system libraries.
A container engine (AKA runtime), such as Docker, rkt, or a Kubernetes CRI-compatible runtime like containerd, is the core. It provides an additional layer of abstraction and automation over operating-system-level virtualization primitives such as cgroups and kernel namespaces (in the Linux case), plus a union-capable file system, to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines with hypervisors. As actions are applied to a Docker base image, union file system layers are created and documented, such that each layer fully describes how to recreate an action. This strategy enables Docker’s lightweight images, as only layer updates need to be propagated (compared to full VMs, for example).
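As a minimal sketch of how layers accumulate, consider a hypothetical Dockerfile (the base image, file names, and commands are illustrative): each instruction produces a new, cacheable union-filesystem layer, so changing the application code invalidates only the layers that come after it.

```dockerfile
# Base image layer, shared by every image built FROM it.
FROM python:3.11-slim
WORKDIR /app
# Copy the dependency manifest first, so the expensive install layer
# below is rebuilt only when requirements.txt actually changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Application code layer: it changes most often, so it comes last.
COPY . .
CMD ["python", "app.py"]
```

Because each layer is content-addressed, pushing a new version of this image uploads only the changed layers, which is what keeps Docker images lightweight compared to shipping full VM disks.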
Containers and DevOps Culture
As Continuous Delivery becomes the de facto industry standard, with rapid and incremental code releases, we must promote cultures, methodologies, and technologies that enable us to achieve faster and faster release speeds. DevOps as a culture is one of the most progressive methods to achieve speed. It empowers teams to self-manage everything from design to scale: “You build it, you ship it, you own it.” From a pure speed perspective, this is fantastic, as it eliminates bottlenecks in the release process. From a management perspective, eliminating bottlenecks also means risking quality and adding “you broke it” to the process. This is where containers step in to reduce quality risks.
Container technology is about portability. Containers give you the flexibility to be agnostic to practically everything on any level: languages, technologies, and even platforms.
- Developer: Build once, run everywhere.
- QA/testing: Consistent and synchronized environments between test and production.
- Sys-admin: Config once, run anything.
- Ops: A one-stop shop for building, shipping, and running/scaling software out of the box, allowing focus on features, bugs, and shipping better software rather than on setup and maintenance of environments and tools.
Containers make life easier for developers, sys-admins, and ops by reducing config variables and time-consuming setup and maintenance tasks. For QA and testers, containers are a game changer because they reduce risk and increase efficiency. Think how much efficiency you gain, and how much risk you eliminate, just by being able to ensure that the system configuration of your test environment is identical to your production environment. As today’s software architectures are complex, written in various technologies, and run on multiple environments for each development iteration, we need containers to manage consistency in the endless matrix of technologies and environments.
Containers are only one component of achieving a DevOps-enabled culture. Keep in mind that if that is your goal, you need to invest in the following:
- Small heterogeneous units/teams/companies/tribes with well-defined areas of responsibility.
- Teams self-manage everything from design to scale.
- A CD pipeline supporting independent deployments as a service.
- Continuous Testing: Many small and relevant automated tests running continuously against each component build as well as against integration builds.
- Smart quality metrics (such as Quality Holes, Code Coverage, Test Quality, etc.) to automate build promotion for any microservice and from any test automation suite (unit, integration, E2E, etc.).
Container Runtime – Gain Creativity, Quality & Speed
The container runtime is basically the CLI that you use to manage the build, ship, and run phases of your images. To start experimenting, you’ll need to install Docker or rkt as your container runtime and write a Dockerfile at the root of each of your projects (make sure they are portable and configurable via env-vars). Then, just push it to your Docker registry. If you don’t have one, Docker Hub or Quay.io can set you up fast and for free, and so can your cloud vendor (AWS/GCE).
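A typical first loop with the Docker CLI might look like the following sketch (the image name, registry organization, port, and env-var are placeholders; these commands assume a running Docker daemon and a registry account):

```shell
# Build an image from the Dockerfile at the project root.
docker build -t my-service:1.0 .

# Smoke-test it locally; configuration is injected via env-vars.
docker run --rm -e DB_HOST=localhost -p 8080:8080 my-service:1.0

# Tag and push to your registry (Docker Hub shown; Quay.io works similarly).
docker tag my-service:1.0 myorg/my-service:1.0
docker push myorg/my-service:1.0
```

Once the image is in a registry, any machine with a container runtime can pull and run it unchanged.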
Once your services stack is up and running within containers, it’s replayable everywhere as is, with no further setup needed (other than the container runtime on the target VM, of course). Now you will enjoy:
- Accelerate Developers and Testers: Stop wasting hours setting up dev/test/stage environments, spinning up new instances, and making copies of production code to run locally. With containers, you simply take copies of your live environment and run them on any new endpoint running a container engine.
- Speed Up QA and Test Automation: QA must switch from a linear process to supporting non-linear deployments. Meaning that, to avoid bottlenecks, containers can be used to parallelize tests so they all run at once against any permutation or environment. Compute cost will not change, as running a single instance for an hour equals running six instances for 10 minutes each.
- Empower Creativity: The isolation capabilities of containers free developers and testers from constraints: they can use the best language and tools for their application and test infrastructure without worrying about causing internal tooling conflicts.
- Eliminate Environment Inconsistencies: Packaging an application in a container with its configs and dependencies guarantees that the application will always work as designed in any environment: locally, on another machine, in test or production, without having to worry about installing the same configurations into different environments.
- Ship Software Faster and at Scale: Containers allow you to dynamically change your application, from adding new capabilities and scaling services, to quickly changing problematic areas. This enables developers and testers to develop and test applications quickly within any environment with a simple, consistent pipeline.
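To make the “portable and configurable via env-vars” point concrete, here is a minimal Python sketch (the variable names are illustrative, not a fixed convention): the same image can run against any environment simply by changing the values passed with `docker run -e ...`.

```python
import os

# Read all configuration from environment variables, falling back to
# development defaults so the same image runs unchanged everywhere.
db_host = os.environ.get("DB_HOST", "localhost")
db_port = int(os.environ.get("DB_PORT", "5432"))
debug = os.environ.get("DEBUG", "false").lower() == "true"

print(f"connecting to {db_host}:{db_port} (debug={debug})")
```

In production, the container would be started with `-e DB_HOST=... -e DB_PORT=...`, so no file inside the image ever has to change between environments.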
Container Orchestration – The Last Piece Of The Puzzle
Now that containers have become prevalent, several cloud management challenges, such as continuous testing, automated deployment, auto-scaling, and management of containerized applications, require attention and resolution. This is how orchestration technologies like Docker Swarm, Google Kubernetes, AWS ECS, Mesosphere/DCOS, HashiCorp Nomad, etc. (“The Container Orchestration War Has Begun”) were born. Container orchestration complements the container runtime by helping you manage container deployment and lifecycle. Let’s see what it brings to the table, specifically examining Kubernetes:
- Automatic Bin Packing: Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads to drive utilization and save resources.
- Self-Healing: Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don’t respond to your user-defined health check, and stops advertising them to clients until they are ready to serve.
- Horizontal Scaling: Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
- Service Discovery and Load Balancing: No need to modify your application to use an unknown service discovery mechanism. Orchestrators give containers their own IP addresses and a single DNS name for a set of containers and can load balance across them.
- Automated Rollouts and Rollbacks: The orchestrator progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, it will roll back the change for you. Take advantage of a growing ecosystem of deployment solutions.
- Secret and Configuration Management (e.g. access tokens etc.): Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.
- Storage Orchestration: Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
- Batch Execution: In addition to services, orchestrators can manage your batch and CI workloads, replacing containers that fail, if desired.
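Several of these features come together in a single Kubernetes Deployment manifest. The following is an illustrative sketch (the names, image, port, paths, and thresholds are placeholders): `replicas` drives horizontal scaling, the `resources` block feeds the bin-packing scheduler, the `livenessProbe` powers self-healing, and the secret reference keeps credentials out of the image.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                     # horizontal scaling: change with one command
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: myorg/my-service:1.0
          resources:              # used by the scheduler for bin packing
            requests:
              cpu: 250m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          livenessProbe:          # self-healing: restart on failed checks
            httpGet:
              path: /healthz
              port: 8080
          env:
            - name: DB_PASSWORD   # secret management: not baked into the image
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
```

Updating the image tag in this manifest and re-applying it triggers a progressive rollout; scaling is as simple as changing `replicas`.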
Show Me the Money
Of course, time is money, but Docker can also save you hard, physical dollars in infrastructure costs. Studies by Gartner and McKinsey cite average data center utilization as between 6% and 12%. Quite a lot of that underutilized space is due to static partitioning. With physical machines or even hypervisors, you need to defensively provision CPU, disk, and memory based on a high watermark of possible usage. Containers, on the other hand, allow you to share unused memory and disk between instances. This allows you to pack many more services onto the same hardware, spinning them down when they’re not needed without worrying about the cost of bringing them back up again. If it’s 3am and no one is hitting your Dockerized intranet application but you need a little extra horsepower for your Dockerized nightly batch job, you can simply swap some resources between the two applications running on common infrastructure.
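The effect of static partitioning can be sketched with toy numbers (all figures below are made up for illustration, not taken from the cited studies): giving each service its own defensively sized VM wastes most of the capacity that bin-packed containers could share.

```python
# Illustrative peak CPU-core needs for four services (made-up numbers).
peak_cores = {"web": 1.5, "api": 1.0, "batch": 2.0, "intranet": 0.5}

# Static partitioning: each service gets its own defensively sized 4-core VM.
static_cores = 4 * len(peak_cores)

# Containers on shared hosts need roughly the sum of the peaks, and in
# practice less, since the peaks rarely coincide (3am batch vs. daytime web).
packed_cores = sum(peak_cores.values())

print(f"static: {static_cores} cores, packed: {packed_cores} cores "
      f"({packed_cores / static_cores:.0%} of the static footprint)")
```

Even this naive packing, which still reserves every service’s full peak, uses under a third of the statically partitioned footprint; time-shifting workloads shrinks it further.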
The bottom line of this post is that containers are perfectly aligned with the complex software architectures that we use today, the methodologies and cultures we work in, and the frequency of releases that we as businesses must meet. But more importantly, they are ideally placed to combat the issues that arise as a result of faster and faster release speeds, namely losses in software quality.
In our next post, we’ll focus on the implications of containers in QA and walk you through how to improve quality with a simple yet elegant use case of containers in our pipeline.