What's the fuss with Kubernetes?

It's no secret that Kubernetes is dominating the DevOps landscape nowadays, and is slowly becoming the de-facto standard for managing containerized apps at scale.

I am going to write a little about this, giving some pointers to help people who are not (yet) Kubernetes experts get started, and some insight into how things came to be the way they are today.

In order to understand the "why" of Kubernetes, we will need to take a little history lesson, to see how the application delivery landscape evolved over time.

Deployment 101: scripts/binaries running on a single machine

Applications are bundled either as scripts, if we are talking about interpreted languages such as Python, PHP, or Ruby, or as compiled binaries, if we are talking about compiled languages such as Go, C/C++, Java, or .NET.

In the old days (or even nowadays) the easiest way to deploy an application is to get a server (physical bare metal, or virtual in the cloud), copy the application onto it, prepare the operating system with the necessary libraries and other dependencies, and start the application.

This method is easy, handles a decent amount of load, and we can always scale the server vertically (add more resources) if the current capacity is not enough.

Advantages:

  • easy to do
  • easy to understand

Disadvantages:

  • impossible to do zero-downtime upgrades
  • single point of failure (if the server goes down, the whole application goes down)

This method is simple and straightforward, but for an application with uptime requirements it limits our ability to maintain it: we can't deploy too often, because every deploy causes downtime, so deployments end up scheduled at very low-traffic hours (usually during weekends and/or at night), when the short periods of downtime affect as few users as possible.

Scripts/binaries running on multiple machines

The next step in the evolution of deployments is to run the application in parallel on multiple machines.

This resolves the issue of having a single point of failure, but now we have to deal with new kinds of problems:

  • the machine environments might differ (e.g. if we don't run exact replicas of the machines, some dependency versions might differ and we might get slightly different execution environments).
  • the differences between environments become even more apparent when running the application locally for development.
  • doing zero-downtime upgrades is still kind of hard, because we have to build some tooling to intelligently redirect traffic from the old instances to the new ones.

Some of these problems can be mitigated using some kind of automation (e.g. Ansible, to make sure the environments are set up in the same way, with the same dependencies).
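
As a rough illustration of what such automation looks like, here is a minimal Ansible playbook sketch; the host group, package name, and paths are hypothetical placeholders, not taken from any real setup:

```yaml
# Illustrative playbook: make every app server converge to the same
# runtime and dependencies (host group, package names and paths are
# placeholders).
- hosts: app_servers
  become: true
  tasks:
    - name: Install the language runtime
      apt:
        name: python3
        state: present

    - name: Copy the application code to the server
      copy:
        src: ./app/
        dest: /opt/myapp/

    - name: Install the application's dependencies
      pip:
        requirements: /opt/myapp/requirements.txt
```

Running the same playbook against every machine keeps the environments from drifting apart.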

The biggest drawback of this solution, by far, is the waste of resources. The application will not have a constant workload, and during the quieter periods most of the available resources sit idle. The company still has to pay for them: after all, we pay for servers by the hour, not by how much we actually use them. (This can be somewhat mitigated by moving to a serverless architecture, which incurs costs only for the compute we actually use, but that requires completely rewriting/re-architecting the app, which is considerably harder than just moving it from one server to another.)

Containerization

To tackle the environment discrepancies and the need for application isolation, containers appeared and became pretty popular. Nowadays they are considered the standard solution for running applications in a cloud environment, because:

  • we can have the same application running in different environments, since the dependencies are bundled into a transferable image which can be reliably built from scratch using Dockerfiles (see the short example after this list).
  • the applications that run inside containers are completely isolated, so we can cram multiple containers onto the same server without worrying about incompatible dependencies, or about them trying to access the same resources at the same time and causing issues.
  • we can limit the resources used by each containerized app, so we can figure out how many instances of different apps fit on a server.
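
To make the first point concrete, here is a minimal example Dockerfile for a Python app; the base image, file names, and start command are placeholders, not taken from a real project:

```dockerfile
# Illustrative Dockerfile: everything the app needs is baked into the
# image, so the same image runs identically on any machine.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so they are cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how to start it
COPY . .
CMD ["python", "app.py"]
```

Building this with docker build produces an image that can be shipped to any server (or any developer's laptop) and behaves the same everywhere.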

By using containers we gain flexibility, but in a production environment it is much harder to manage them all. After all, more moving parts are harder to manage: when we make updates we need to take all of them into consideration, and one little piece acting up can cause a domino effect.

Now that we have most of the issues from the previous deployment methods sorted out to some degree, we step into the land of complexity. With many moving parts and many communication patterns between our services, we need a way to find (or create) some order in the impending chaos.

Enter container orchestrators

With many small containers running on a bunch of servers, the need for specialized software to manage all of this became more and more apparent.

The demand being there, the market provided: specialized software appeared that knows how to organize and manage a lot of containers at once, abstracting the small details away from us humans. These tools are called container orchestrators.

One such tool is Kubernetes, but others exist as well (e.g. Nomad or Docker Swarm).

Container orchestrators abstract away the servers and the way containers run; we, as humans, are only left to organize our applications using their resource definitions.

From here on I will switch to Kubernetes-specific terminology, so you can get a better grasp of what is actually happening.

When we want to deploy a containerized application, we need to split and organize it into multiple Kubernetes-specific resources. The main building block is the Pod, which represents a set of containers that always run together on the same machine. You can then organize Pods into structures that are managed by Kubernetes and define the desired state of your app, either through Deployments (a number of identical replicas running at the same time, scaled horizontally to the desired count) or DaemonSets (exactly one replica per machine; this mode is more suitable for cluster management workloads, such as log collectors or monitoring agents, than for regular applications).
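
As a rough sketch, a minimal Deployment manifest could look like the one below; the names, labels, image, and resource limits are hypothetical placeholders:

```yaml
# Illustrative Deployment: three replicas of a single-container Pod
# (names, labels, image, and limits are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: "256Mi"
              cpu: "500m"
```

Applying it with kubectl apply -f deployment.yaml tells Kubernetes the desired state, and Kubernetes takes care of keeping three replicas running.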

Then, after you have your containers up and running, you need to make them communicate with each other. In Kubernetes you can do that via Services, which expose a set of Pods under a stable, cluster-wide name so they can serve traffic internally in the cluster. This makes developing and integrating microservices a breeze.
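
Continuing the hypothetical example from above, a minimal Service manifest could look like this:

```yaml
# Illustrative Service: exposes the Pods of the Deployment above
# inside the cluster under the stable name "my-app".
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Other workloads in the cluster can then reach the app at http://my-app (or the fully qualified my-app.<namespace>.svc.cluster.local), regardless of which machines the individual replicas happen to run on.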

But what about configuration? As the twelve-factor app methodology states, the same app should run the same code in however many environments we want. Development, production, staging, QA, etc.: it doesn't matter, we should have the same containers running. The differences between environments should be injected into the containers by the environment itself, either via environment variables or files.

Kubernetes offers a way to do just that using ConfigMaps and Secrets, where we can store our configuration keys in a centralized way and then inject them as we please into each container we run.
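
A minimal sketch of how that can look, again with placeholder names and values: a ConfigMap holds the configuration key, and the Deployment's Pod template injects it into the container as an environment variable.

```yaml
# Illustrative ConfigMap with a single configuration key
# (names and values are placeholders).
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  DATABASE_HOST: db.staging.internal
---
# Fragment of the Deployment's Pod template from the earlier sketch;
# replicas, selector, and the other fields are omitted for brevity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          env:
            - name: DATABASE_HOST
              valueFrom:
                configMapKeyRef:
                  name: my-app-config
                  key: DATABASE_HOST
```

The same image can then be promoted from staging to production untouched; only the ConfigMap (or Secret) differs between environments.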

Kubernetes offers a lot more, but I consider these to be the most important building blocks, and this should be enough to get anybody started.

Conclusion

The application deployment landscape has evolved a lot since the inception of the internet, and although our tooling today has become more and more complex, it has managed to solve a lot of the issues teams and companies were facing in the past. This allows us to ship better, more reliable software that can serve far more users than ever before.

But to do that, we need to handle the complexity of the environment the application runs in. Sure, for simple monolithic apps the deployment process is rather easy, but for a company that is scaling, with a lot of moving parts that need to collaborate with each other seamlessly, operational complexity quickly becomes a problem.

Nowadays we have container orchestrators such as Kubernetes to help with this problem and make our lives a little bit easier when it comes to managing applications at scale.

I, for one, really enjoy using Kubernetes for all my apps, even when they are simple monoliths. The alternative is the plain old "deploy on a single machine" method, which really hinders the ability of an app (and a company) to scale up.

At Vuuh we make intensive use of Kubernetes; it really simplifies the management of our services and allows us to focus more on developing features rather than operating the infrastructure. The managed Kubernetes offerings of the public cloud providers have also played a big role in the adoption and rise in popularity of such tools, for sure.