There are SO many options for running Docker containers today, and if you are new to DevOps, you may find it a little overwhelming. Along with these options come strong opinions about what you need to do, all while you try to figure out a solution to your problem.
Sometimes all you need is to run multiple Docker containers on a single server, and choosing between the options could seem a little intimidating. What if you choose the wrong solution for a production server?
I am sure that a quick search immediately shows a few Docker orchestration solutions:
Which one is best for a single server deployment? Is it production-ready? What about scaling?
I think that when it comes to running multiple containers while limited to using just one server, the main thing is to keep it simple. My initial thought is, of course, why just one server, but then again, sometimes it’s just not up to us.
So what does simple mean?
- You don’t want to spend too much time setting it up.
- Don’t over-engineer the solution.
- Favor simple configuration.
- You are not Google / Facebook / Linkedin or the rest of them.
- Remember that someone else may need to take over the project.
A common fallacy is that everything has to scale. We look at what the big players are doing and try to copy their solutions, often without realizing that they have a different set of problems. Their solutions may be overkill for your limited use case.
Our criteria for a working solution
- It has to be easy to launch new containers.
- You should be able to update the configuration easily.
- Containers should be able to find each other (service discovery).
- Containers should be able to communicate with each other.
- The configuration should be readable.
- Containers should stay up and running if healthy.
You can see that I didn’t include requirements such as multi-node support because we only have a single server to work with. I will usually try to plan for scale, but I’ll dive into that later.
Let’s look at the options we have on hand:
The good old “docker run” command launches the containers. While it does cover most of our requirements, it falls short when it comes to setup, configuration, and readability. Having to remember the parameters for each container is an annoyance and makes it hard to figure out the architecture just by looking at the command-line history.
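To make the pain concrete, here is what a hypothetical two-container setup (a web app talking to Postgres; image names and credentials are placeholders) looks like with plain `docker run`. Every network, volume, port, and environment flag has to be typed, or scripted, by hand:

```shell
# Create a user-defined network so the containers can reach each other by name.
docker network create app-net

# Database container: named volume for data, password via environment variable.
docker run -d --name db --network app-net \
  -v db-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:15

# Web container: published port, connection string pointing at "db",
# and a restart policy so it comes back up after a crash.
docker run -d --name web --network app-net \
  -p 8080:80 \
  -e DATABASE_URL=postgres://postgres:secret@db:5432/postgres \
  --restart unless-stopped \
  my-web-app:latest
```

None of this is captured anywhere except your shell history, which is exactly the readability problem.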
Docker Compose took the docker run command and wrapped it under a declarative configuration file. This way, you have an easy way to describe your environment, networks, volumes, and dependencies - making modifications easy. Of all the docker orchestration solutions out there, I think this is the most “readable” option.
Docker Compose also answers all of our requirements:
- One command to launch and update multiple containers.
- Containers can find and talk to each other on a network in a predictable way.
- As mentioned before, the simple YAML configuration file is readable, and anyone can understand the architecture reasonably quickly.
- You can set rules for restarting containers when their health check fails or they exit with an error.
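The same hypothetical two-container setup from above becomes a single declarative file. (Service and image names are placeholders; this is a minimal sketch, not a production-hardened configuration.)

```yaml
# docker-compose.yml
version: "3.8"

services:
  web:
    image: my-web-app:latest          # placeholder image name
    ports:
      - "8080:80"
    environment:
      # Service discovery: "db" resolves to the database container
      # on the default network Compose creates for this project.
      DATABASE_URL: postgres://postgres:secret@db:5432/postgres
    depends_on:
      - db
    restart: unless-stopped           # restart on failure, stay down if stopped manually

  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: secret
    restart: unless-stopped

volumes:
  db-data:
```

A single `docker-compose up -d` launches everything; edit the file and run the same command again to apply changes.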
What it lacks, unless you use Docker Swarm, is the ability to span multiple machines. However, in the single-server use case, that’s not a requirement.
With version 3 of Docker Compose, you can use it to deploy to Docker Swarm as a “stack,” but that comes with the overhead of running the Swarm itself, which brings us to the next option.
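For reference, deploying a version 3 Compose file as a Swarm stack looks roughly like this (the stack name is hypothetical):

```shell
# Turn this host into a one-node Swarm.
docker swarm init

# Deploy the Compose file as a stack named "myapp".
docker stack deploy -c docker-compose.yml myapp

# List the services the stack is running.
docker stack services myapp
```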
I have to admit that there is something elegant about the way you can launch a cluster spanning multiple servers with Docker Swarm. The learning curve is far gentler than that of Kubernetes, and it looked promising initially.
Many companies bet on Docker Swarm because it was easy to get started with, and you could have containers running in clusters spread across multiple servers in minutes. As time went by, that same simplicity started to be an issue as use cases became more and more complicated, and some issues are just not solvable. As of today, the market has shifted to Kubernetes, and I would bet that Docker Swarm is not going to be the right choice for future projects. (Sorry, Docker Swarm contributors, you did an outstanding job.)
Even though our requirement is for just one server, an argument can be made to think ahead about scale. But then, if you are already thinking ahead, Swarm is not a good option anyway.
It’s everywhere, and everyone is talking about it, all the companies are either running it or “transitioning” to Kubernetes. If you check it against our requirements, it does answer all of them and more. Much much more, and that is one of the problems.
There is a case for using Kubernetes in most solutions, but it does have its shortcomings. Kubernetes was built with scale in mind and resembles the way Google does things. At Google, servers are abundant, and workloads are spread across multiple servers almost by default (I’m exaggerating). The notion of stateless, sometimes ephemeral, application instances fits well with Docker, and Google built a monster around it.
As I said, there are a few things to consider there:
- The learning curve. If you are just starting out, it’s not going to be easy nor fast. There are many concepts to grasp and multiple ways of doing things. It could get a little overwhelming.
- The documentation is, well, how should I put it? Not that great. Do you consider reading bug-tracker issues documentation?
- Running the Kubernetes cluster itself - you have to install and maintain it. Unless you are using a managed cloud provider solution (EKS, GKE, AKS, etc.), it’s going to take a lot of effort.
- Kubernetes was built to run on more than one node, so there is not much point in running it on one server. Some say that a single-node Kubernetes cluster is a good idea while developing locally. However, projects tend to grow and require more and more resources, moving you away from using just one server.
- You don’t really need all of these features for now.
For most of my clients, I would definitely recommend Kubernetes once they outgrow the “just one server” stage. Yet for most local development and small solutions, I would steer away from it.
Why I recommend Docker Compose
It must be evident by now that I’m pointing you towards Docker Compose for running multiple containers on a single server.
The initial setup is easy, and you can learn as you go. When the time comes to scale to multiple servers, you already have a sound configuration file that represents your architecture well enough to use as a base. The time you spent setting up Docker Compose doesn’t have to go to waste; it makes a useful reference when moving to Kubernetes.
On top of it, it’s going to be easy to run and maintain. It gives you more time to work on backups, monitoring, security, and the rest of the tasks that you have to do for production.
Just go ahead and run with it and get it done quickly. When the time comes to scale, you can switch to the flavor-of-the-day Docker orchestration solution, but for now, KISS.
And if you need any help deciding which option works best for your use case, or want help getting started, feel free to reach out to me here.