Containers are a lightweight cousin of virtualization. Virtualization is a way to run many computers on one computer, virtually: it presents virtual hardware, and you install an operating system on top of it (Windows, macOS, or Debian). This is currently the main method of segmenting workloads on a server. Containers are another way to do it.
Containers let you break your server workload into smaller pieces than virtualization does. A container runs against the host's kernel instead of its own kernel on virtual hardware. Point being, there is no virtual hardware in a container; it is segmented using Linux cgroups and namespaces. Docker was the first container platform to become really popular.
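A quick way to see the "shared kernel" idea for yourself, assuming Docker is installed (the `alpine` image here is just an example): the kernel version reported inside a container matches the host's, because there is no second kernel.

```shell
# No virtual hardware: the container reports the host's own kernel.
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same version, printed from inside a container
```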
Docker is almost a standard at this point. When you drive down the road, everything makes sense because we all follow the same traffic laws: a red light means stop, a green light means go. Docker attempts something similar for how you package and deploy an application. Most open-source applications we come across ship an official Docker image. What this means is that if you learn how to drive (use Docker), you can deploy a much larger number of applications.
Once you start deploying lots of containers, it gets confusing, and you need a container orchestrator. When you already have a Docker workflow for deploying applications, the fastest and easiest option is Docker Swarm.
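If you already have a Docker host, turning it into a Swarm and deploying a stack is only a couple of commands. A minimal sketch (the file name `stack.yml` and the stack name `web` are placeholders):

```shell
docker swarm init                     # make this node a Swarm manager
docker stack deploy -c stack.yml web  # deploy the services defined in stack.yml
docker service ls                     # confirm the services are running
```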
Docker Swarm lets you manage a cluster of servers (more than one computer, all working together). In our system, if one computer goes down, your website automatically moves to another computer that is still running. We are actively working to make this more resilient (I'm looking at you, reverse proxy...).
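One way that failover behavior can be expressed is with service replicas and a restart policy in the stack file. A hypothetical example (the image and port are placeholders, not our actual stack):

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 2                 # two copies, ideally spread across nodes
      restart_policy:
        condition: on-failure     # Swarm reschedules a replica if it dies
```

With two replicas, losing one node leaves the other copy serving traffic while Swarm reschedules the failed replica elsewhere.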
The big problems we've run into with Docker Swarm are volume management and ingress. Docker does great on the compute layer, but once you talk about getting traffic into your Swarm, and where to put the long-term storage, it actually becomes a hard problem to solve.
We solved the volume issue by using NFS and bind mounts on our servers. We have split our compute into a Docker Swarm workflow and our volume/data management into a TrueNAS workflow. This works well, is fast, and allows for some unique ways to manage and move data around the datacenter.
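For reference, Docker's built-in `local` volume driver can mount an NFS export directly, which is one way to wire a TrueNAS share into a Swarm service. A sketch (the server address and export path are placeholders):

```shell
# Create a named volume backed by an NFS export on the storage box.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw \
  --opt device=:/mnt/tank/appdata \
  appdata
```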
The other problem has been solved somewhat with Nginx Proxy Manager (see our previous article, "What is a Reverse Proxy?"), but it isn't complete. There are some nuanced issues with Docker networking that strip out the client IP address. That may not sound like a big deal, but it is. When an application like WordPress doesn't see the real IP address, your analytics are all wrong: it reports the same single user visiting over and over. That is bad from both a security perspective and a general marketing perspective.
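The usual culprit is Swarm's routing mesh, which NATs incoming connections so the proxy sees an internal overlay address instead of the client. One common workaround (an assumption about setup here, not necessarily our final fix) is to publish the proxy's ports in host mode so traffic bypasses the mesh:

```yaml
services:
  proxy:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host    # bypass the routing mesh; the real client IP survives
```

The trade-off is that host-mode ports are bound to the specific node the proxy runs on, which is part of why an external load balancer in front of the Swarm becomes attractive.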
We are going to implement a fix for the IP address problem in Docker ingress, then add load balancing with HAProxy through pfSense, and we'll be off and running with a more redundant and resilient ingress setup. What that means is less downtime for an application if a server goes down.
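As a sketch of where that is headed, an HAProxy frontend can health-check each Swarm node and route around a dead one. The names and addresses below are placeholders; on pfSense this would be built through the HAProxy package rather than edited by hand:

```haproxy
frontend www
    bind *:80
    mode http
    default_backend swarm_nodes

backend swarm_nodes
    mode http
    balance roundrobin
    server node1 10.0.0.11:80 check   # Swarm node 1
    server node2 10.0.0.12:80 check   # used if node1 fails its health checks
```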
Hire Us for your Docker Swarm project.
Owner, Altha Technology