“The Cloud” is often marketed as the best alternative to managing servers yourself. For the most part this is true, provided you’re a corporation that can afford to buy in to managed hosting and trade away the need to hire system administrators to wrangle RHEL or Windows Server machines. But if you’re not a corporation, and you just want to run a couple of side projects and maybe a small business, the value proposition may not hold up.
“The Cloud” is basically managed hosting at the OS level. You pay for a set of VMs that can scale up and down as and when you need them. This is great, especially when you get an influx of visitors to your site or users on your app. However, you pay a premium for all that automation. It will be cheaper for certain use cases, but there are alternatives.
It’s easy to forget that servers actually exist underneath “The Cloud”, and what’s more, you can rent one of these physical machines outright instead of just part of one. In this article I will share my dedicated server setup and how I make use of modern tooling, often associated with cloud and cluster technologies such as Kubernetes, on a couple of dedicated servers.
I run around 40 services ranging from public APIs to websites.
These services all run as Docker containers orchestrated with Docker Compose. I have a “no scripts” policy, meaning nothing on that machine is controlled via Bash, Python or any other scripting language. Everything is configured declaratively and managed with purpose-built software.
Some of these ideas are similar to those shared in The Neomonolith by Adam Shreve, which discusses an application model where the entire stack can be duplicated any number of times to alleviate load. The deployment method I employ is a version of that concept at the infrastructure level. I currently manage two dedicated servers, but I don’t deploy applications that communicate across those boundaries, so I don’t need something like Kubernetes or Docker Swarm to provide a network abstraction across machines. Instead, I deploy small composable components in the form of Docker Compose projects.
Everything is a Docker container. The reasoning for this is partially covered in Infrastructure-as-Code for Personal Projects, but the short version is:
- A unified configuration method that’s declarative
- Composable service model
- Portability across multiple operating systems
I used to deploy things with little makefiles that would do a simple `docker run ...`, but once I hit 3 or 4 services, this became quite cumbersome to maintain and update throughout the lifecycle of each service.
After that, I moved to Docker Compose to declare my configuration up front in a YAML document. I had a handful of websites and an API service in one document before deciding it would be much more maintainable to split the file into per-service documents.
A “service” is all the components required to run that service: API, frontend, databases, etc. And yes, that means each service gets its own database. I was using a single MongoDB instance for three services, but I decided to accept the overhead of running three separate MongoDB instances for the sake of portability and security.
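A per-service Compose file ends up looking something like this (an illustrative sketch only — the image names and service names are made up):

```yaml
# docker-compose.yml for one "service": its API, frontend, and its own
# database all live in a single Compose project, so the whole thing can
# be started, stopped, or moved as one unit.
version: "3"
services:
  api:
    image: example/my-api:latest
    restart: unless-stopped
    depends_on: [db]
  web:
    image: example/my-website:latest
    restart: unless-stopped
  db:
    image: mongo:4
    restart: unless-stopped
    volumes:
      - db-data:/data/db
volumes:
  db-data:
```

Because the database lives inside the project, it is only reachable on that project’s private network, which is where the security benefit comes from.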
Given I run quite a few web services, I need a web gateway. I used Nginx many years ago, then switched to Caddy for painless SSL, but after a while it proved problematic with systemd and cumbersome to reconfigure every time I added or removed a service. So I switched to Traefik for its automation features and tight integration with Docker.
Traefik is a fantastic piece of software. It is driven by service annotations, which in my case means Docker container labels. It uses these annotations to reconfigure its HTTP routing on the fly. So, if I spin up a new website container and label it with:
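Something along these lines (Traefik v2 label syntax; the router name, hostname, and certificate resolver name are placeholders):

```yaml
services:
  web:
    image: example/my-website:latest
    labels:
      # Tell Traefik to route this hostname to this container,
      # and to obtain a certificate for it automatically.
      - "traefik.enable=true"
      - "traefik.http.routers.my-website.rule=Host(`blog.example.com`)"
      - "traefik.http.routers.my-website.tls.certresolver=letsencrypt"
```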
Provided that subdomain is already pointing at my server (more on this shortly), Traefik will just start routing traffic to that container. It also handles automatic SSL via Let’s Encrypt (but be careful: a slight misconfiguration can result in Traefik inadvertently getting you rate-limited by Let’s Encrypt for an entire week!).
Domains are also handled declaratively. I use a tool from StackExchange called dnscontrol, which lets me configure all of my domains and subdomains in a single file. It then pushes changes to Cloudflare via its API, driven by the same infrastructure that controls everything else, described in the next section.
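The dnscontrol file is plain JavaScript. A sketch of what it looks like (the domain, addresses, and variable names are illustrative, and the exact `NewRegistrar`/`NewDnsProvider` signatures vary between dnscontrol versions):

```javascript
// dnsconfig.js — illustrative only; domain and IPs are examples.
var REG_NONE = NewRegistrar("none");
var DNS_CLOUDFLARE = NewDnsProvider("cloudflare");

D("example.com", REG_NONE, DnsProvider(DNS_CLOUDFLARE),
    A("@", "203.0.113.10"),     // apex points at the dedicated server
    A("blog", "203.0.113.10"),  // subdomain for a Traefik-routed site
    CNAME("www", "example.com.")
);
```

Adding a site is then a two-line change: one label on the container for Traefik, one record here for DNS.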
What sits in the middle is an application I wrote called Wadsworth. The goal of this service is to watch a set of Git repositories for changes. If it detects a change, it runs a command inside the repository. This allows me to deploy and update the configuration of any of these applications simply by committing to a Git repository.
Wadsworth is configured to run `docker-compose up -d` for services and `dnscontrol push` for my domains. Since Wadsworth can run any arbitrary command when a repository receives a commit, it’s a very versatile tool that could be used to automate all kinds of things. It lets you apply GitOps and IaC concepts to almost anything.
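The core loop is simple to picture. Here is a minimal sketch of the watch-and-run pattern as a shell function — an illustration of the idea, not Wadsworth’s actual implementation (the function name and arguments are made up):

```shell
# watch_once: fetch a repo, and if the remote head has moved past the
# local one, pull and run the configured command inside the repo.
# Usage: watch_once <repo-dir> <command> [args...]
watch_once() {
  repo="$1"
  shift
  git -C "$repo" fetch --quiet origin
  local_head=$(git -C "$repo" rev-parse HEAD)
  remote_head=$(git -C "$repo" rev-parse origin/HEAD)
  if [ "$local_head" != "$remote_head" ]; then
    git -C "$repo" pull --quiet
    # Run the deploy command, e.g. docker-compose up -d
    ( cd "$repo" && "$@" )
  fi
}
```

Run something like this on a timer per repository and you get the same effect: commit to the config repo, and the matching `docker-compose up -d` or `dnscontrol push` happens on the server shortly after.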
For monitoring, I use Prometheus and Grafana combined with Node Exporter and cAdvisor for system metrics. These tools have helped me uncover issues with services and bottlenecks with resources.
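The scrape side of that stack can be sketched like this (a fragment of `prometheus.yml`; the target hostnames assume the exporters run as containers on a shared Docker network, and the ports are the defaults for each exporter):

```yaml
scrape_configs:
  - job_name: node        # host-level metrics from Node Exporter
    static_configs:
      - targets: ["node-exporter:9100"]
  - job_name: cadvisor    # per-container metrics from cAdvisor
    static_configs:
      - targets: ["cadvisor:8080"]
```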
This has been a rather quick introduction to my setup (written hastily, I might add), so it’s by no means all-encompassing. I hope to rectify that in a future post (series?) that goes into each component in more detail. This article was more of a general overview, along with the whys behind it.