In October 2022, a certain company reached out to me to write an article about DevOps and containerization. I couldn’t complete it in time, and when I finally did, they never responded.
I did a good amount of research for this article, given that I didn’t know much about the space at the time. Now that I work for a DevOps shop, I thought it was only fair to publish it.
I don’t think the writing is the best but it’s better to publish it than to let it rot in my drafts.
You can skip straight to the section about the software development lifecycle if you’re already familiar with DevOps. That said, I think the immediately following sections act as a good primer.
Despite being a simple amalgamation of the words development and operations, DevOps as a concept is widely misunderstood. It isn’t a well-defined term in the literature. The common consensus, however, is that it’s about a culture in your team that maximises the ease and quality of releases, and the speed from development to deployment, by leveraging collaboration, tooling and other methodologies.
According to GitLab, DevOps is
people working together to conceive, build and deliver secure software at top speed. [it] enables software developers (devs) and operations (ops) teams to accelerate delivery through automation, collaboration, fast feedback, and iterative improvement.
DevOps itself is never concerned with the actual application business logic. Like a management methodology, it is about making the development and operations teams more integrated, making the software supply chain more efficient in the process.
Characteristics of DevOps
The meaning, and hence the rules, of DevOps change from org to org, but there are certain properties that any DevOps movement must align on:
Collaboration and Shared Responsibility
From Rouan Wilsenach’s article about DevOps culture,
It’s easy for a development team to become disinterested in the operation and maintenance of a system if it is handed over to another team to look after.
From The Tao of Hashicorp,
focus on the end goal and workflow, rather than the underlying technologies. […] As technologies evolve and better tooling emerges, the ideal workflow is just updated to leverage those technologies. Technologies change, end goals stay the same.
Feedback and Communication
A team taking a more DevOps approach talks about the product throughout its lifecycle, discussing requirements, features, schedules, resources, and whatever else might come up. The focus is on the product, not building fiefdoms and amassing political power. Production and build metrics are available to everyone and prominently displayed.
What is Containerization?
Containerization, in the simplest of terms, means isolation. It is one way of doing OS-level virtualization, which means running a program, be it a simple application or an entire OS, on top of your current OS.
A lot of containerization software exists in the industry today. However, the one that completely transformed the landscape by providing an easier UI is Docker. For the rest of the article, when we say containers, we mean Docker containers.
The problems faced in DevOps
The development of almost all software can be boiled down to eight steps: plan, build, test, package, secure, release, deploy and monitor. This is called the software development lifecycle. A DevOps-driven team needs to care about the efficiency of all these steps, and sure enough, teams face issues in each of them. Let’s look at some of the common problems faced in these steps.
Planning involves gathering requirements, laying out the roadmap, prioritising and making sure everyone is on board with what’s on the whiteboard. This list is not exhaustive, and every item on it is difficult.
Sometimes a designer might want to see how feasible an element is, or a PM wants to see how a certain feature will be used.
Only writing code (as opposed to programming) is probably the easiest part of the cycle. The difficult part is setting things up: choosing the tooling, checking for cross-platform issues, and so on. The problem grows when you onboard a new hire and have to help them set up their system with all the tools and technologies you use.
Building quality into software is essential for anything remotely serious. Integration or even unit testing can help catch errors well before deployment, on the developers’ local machines themselves. But how do you ensure your tests cover most cases, inputs or scenarios without a lot of friction in setting them up, or without risking your own system?
This is the stage where you “build” your application, service or library for production usage by your end-users: you build a web application, compile a service binary, or something similar. This brings problems with it: Where to build? What tools to build with? How to check if the builds are consistent?
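One way containers answer these questions is with a multi-stage Dockerfile: the build toolchain is pinned in the file itself, so every machine builds with the same tools. A minimal sketch, assuming a hypothetical Go service (all names are illustrative):

```dockerfile
# Build stage: the Go toolchain version is pinned, so every machine
# (developer laptop or CI) builds with exactly the same tools.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

# Runtime stage: only the compiled binary ships, keeping the image small.
FROM alpine:3.19
COPY --from=build /out/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

The “where to build” question also dissolves: anywhere Docker runs produces the same image.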
After we have built our website, executable or whatever else, how do we verify that it has not been tampered with? How do we check that a build is actually what we want?
We also want to ensure that the applications cannot harm the underlying system running them.
Releases & Rollbacks
This is where the consistency of the build comes into play, especially if you have a distributed system. Rollbacks are also crucial when things go wrong. But they can be hard to implement correctly and sometimes lead to longer downtimes.
The problems described above all demand a few things in common: faster development, immutability of builds, and security via isolation. With containerization, each of these is addressed in one way or another.
Docker follows a declarative, text-file-based approach for creating images via its configuration file, the Dockerfile. It is a one-stop shop for all your configuration and tool versions, and you can manage it like any other source file.
What is declarative? Instead of writing code that says “start this app on port 8080”, you write “port = 8080” and start the container.
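In Docker terms, a minimal sketch of this declarative style, assuming a Node.js app (the file names are illustrative):

```dockerfile
# Everything below declares desired state; Docker works out how to get there.
FROM node:20
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
ENV PORT=8080
EXPOSE 8080
CMD ["node", "server.js"]
```

The same file is read by every developer and every CI run, so the environment is defined once, not re-described in a wiki.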
This feature makes containers excellent for abstracting and composing different environments. Want a staging environment? No problem. Change some variables and spin up another container without worrying about what will happen to the production environment.
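As a sketch of what that looks like in practice, assuming an image tagged `myapp:1.4.0` (a hypothetical name), a staging instance is just the same image with different variables and ports:

```shell
# Production instance
docker run -d --name myapp-prod -e APP_ENV=production -p 80:8080 myapp:1.4.0

# Staging instance of the very same image, different variables and port;
# the production container is untouched.
docker run -d --name myapp-staging -e APP_ENV=staging -p 8081:8080 myapp:1.4.0
```

Both containers come from one immutable image, so what you test in staging is byte-for-byte what runs in production.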
In the previous section, we saw the problems that come with packaging and security. Docker images are meant to be read-only and immutable. This works well for security, as we don’t want our builds to be tampered with. After a release image is built, it is pushed to a registry like GitHub Container Registry or Docker Hub.
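A sketch of that release flow with the standard `docker` CLI (the registry path and version are illustrative):

```shell
# Build a versioned release image and push it to a registry
docker build -t ghcr.io/acme/myapp:1.4.0 .
docker push ghcr.io/acme/myapp:1.4.0
```

From then on, every environment pulls the exact same immutable artifact rather than rebuilding it.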
Creating a Docker image also means that you don’t have to worry about configuration drift or dependency mismatches. Everything a particular application needs is bundled into the image, giving consistent builds. Not to mention, since the Dockerfile is basically just code, it can be version-controlled via an SCM like Git.
The benefit of images shows not only when releasing but also when rolling back the inevitable bugs that make it into the application. If your image releases are versioned, which they are in most cases, all you have to do is pull a previous version from your registry, spin it up and take the current one down.
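A rollback sketch using the standard `docker` CLI (image names and versions are illustrative):

```shell
# Roll back from 1.4.0 to the previous known-good release 1.3.2
docker pull ghcr.io/acme/myapp:1.3.2
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 80:8080 ghcr.io/acme/myapp:1.3.2
```

No rebuild is involved; the old artifact is already sitting in the registry, which is what keeps rollbacks fast.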
Consistent development tools
You can not only ship your entire production application wherever and however you want, but also speed up development by including tools inside your containers. New developer joining your team? New container for them.
Tools like VS Code Dev Containers make this process super easy. You can have per-project development containers, and all a new developer has to do is pull and start the container.
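As a sketch, a Dev Container is described by a `.devcontainer/devcontainer.json` file in the repo (the image and command here are illustrative):

```json
{
  "name": "myapp-dev",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:20",
  "forwardPorts": [8080],
  "postCreateCommand": "npm install"
}
```

When a new developer opens the project, the editor builds and starts this container, so “setting up the dev environment” becomes a one-click step.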
Testing via Isolation
We discussed the importance of testing and its problems. How do we ensure that tests run against scenarios as rigorous as possible without harming our development or CI systems? Run them inside containers!
Libraries like ory/dockertest provide a first-class experience in spinning up entire databases inside containers and testing your code against them!
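A minimal sketch based on dockertest’s documented usage, spinning up a throwaway Postgres for a test run. It assumes a running Docker daemon, and the connection details are illustrative:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
	"github.com/ory/dockertest/v3"
)

func main() {
	// Connect to the local Docker daemon.
	pool, err := dockertest.NewPool("")
	if err != nil {
		log.Fatal(err)
	}

	// Start a disposable Postgres container for this test run.
	resource, err := pool.Run("postgres", "16", []string{"POSTGRES_PASSWORD=secret"})
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Purge(resource) // tear the container down afterwards

	// Retry until the database inside the container accepts connections.
	var db *sql.DB
	if err := pool.Retry(func() error {
		var e error
		db, e = sql.Open("postgres", fmt.Sprintf(
			"postgres://postgres:secret@localhost:%s?sslmode=disable",
			resource.GetPort("5432/tcp")))
		if e != nil {
			return e
		}
		return db.Ping()
	}); err != nil {
		log.Fatal(err)
	}

	// Run your tests against db here, then let the deferred Purge clean up.
}
```

The host system never installs Postgres; the database exists only for the lifetime of the test.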
Security via Isolation
As discussed earlier, when we use containers, we package and ship applications with their dependencies rather than relying on the underlying operating system. This significantly reduces the attack surface: the application no longer depends on whatever is installed on the host, so it is protected against host-level dependencies being exploited.
With proper auditing, at least a smoke-test verification of all the dependencies that goes to our containers, we can protect our apps against a lot of threats.
Another thing to note is that each container runs in its own isolated environment. This isolation prevents containers from interfering with each other, which reduces the chances of security breaches or accidental leaks. You can start a billion containers and be sure that if one fails, the others will remain intact, or at worst fail for unrelated reasons.
Containers are much lighter than their heavyweight virtualization counterparts, virtual machines. In traditional virtualization, a hypervisor emulates hardware to run virtual machines. Containerization, on the other hand, is just running another process on top of your base OS using kernel mechanisms like namespaces and cgroups.
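You can glimpse these mechanisms without Docker at all. On most modern Linux systems, `unshare` from util-linux can put a process into fresh namespaces (this sketch assumes Linux and unprivileged user namespaces are enabled):

```shell
# Run ps in new user, PID and mount namespaces; --map-root-user makes us
# "root" inside the namespace so /proc can be remounted.
unshare --user --map-root-user --pid --fork --mount-proc ps aux
# Inside, ps sees only the processes of this tiny "container": itself.
```

Docker's isolation is essentially this, plus cgroups for resource limits and an image format on top.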
The benefit of ditching the hypervisor shows in real-life examples. Google was starting around 2 billion containers per week, and that was back in 2014! That gives us an idea of how lightweight containers actually are.
Containers also let us limit resource usage, like memory or CPU, with simple configuration.
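For instance, `docker run` exposes flags for this directly (the image name is illustrative):

```shell
# Cap the container at half a CPU core and 256 MiB of memory
docker run -d --cpus="0.5" --memory="256m" myapp:1.4.0
```

A runaway process inside the container gets throttled or killed without taking the host down with it.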
A last point, which might be a bit of a stretch but can definitely become reality, is that containers can be run by anyone. The designer on your team doesn’t have to learn about Webpack to look at the application. They just need to install Docker on their machine, download the code, and start the container.
While not a better solution than staging environments or tunnels, teams actually do this sometimes!
Containerization is often cited as one of the key technologies enabling DevOps culture. Containers allow much more rapid and consistent deployment of applications, and they make it much easier to manage dependencies and isolate applications from each other. This makes it possible to deploy much more frequently, and with much less risk.
Containerization improves workflow automation by making CI more efficient through isolated testing. It makes packaging and releasing a breeze with images and containers. It enables iterative development and quick feedback by letting anyone reproduce environments suited to the scenarios being tested.
In the end, it’s safe to say that containerization is a capable catalyst for incorporating DevOps methodologies into a team. It is no silver bullet, but it makes complex application deployments quick and reliable.