Containment Strategies

It’s been a year and a half since I last wrote about this, and there’s clearly a lot going on in that space at the moment, so I thought I’d jot down the status quo from my perspective.

After all, I’ve been using containers for well over three years now, and it’s impossible to have spent this long doing so without forming an opinion – even if most of my use has been to spin up and tear down development environments of various sorts (some of which I’ve posted online), I’ve also been arguing the pros and cons of using containers for live services, and we’ve had a bunch of internal services running off vanilla LXC for a good while.

The way I see it, if you cast aside for the moment all the enterprisey things like full-blown orchestration, software-defined networking, monitoring and pretty UIs, there are two simple1 takes on containers – the sysadmin’s and the developer’s.

Boxes With Baggage

As a part-time sysadmin, the first thing that comes to mind when people go on about the ease with which they can throw together an application stack using Docker (usually during fancy demos) is that grabbing random Docker images off the Internet isn’t really a good idea – putting aside the security implications (which ought to be enough), there’s usually a fair amount of bloat (oddball packages and binaries), uneven baseline images (different distributions and userland layouts) and a lack of conventions for such things as mount points, configuration, etc.

So the only sane way to deal with that if you intend to run a business atop containers is, of course, building your own images (preferably taking the time and trouble to clean out any unused tools2) and maintaining your own repository.
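
As a rough sketch of what “building your own images” looks like in practice, here’s the shape of a home-grown Dockerfile – base image and package are purely illustrative – that installs only what it needs and scrubs the package caches afterwards:

    # Dockerfile sketch for a self-built image: pick a base you trust,
    # install only what you need, and clean up after yourself.
    FROM debian:wheezy
    RUN apt-get update && \
        apt-get install -y --no-install-recommends nginx && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*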

LXC, on the other hand, tends to be rather more standardized by default – in the sense that you usually build your containers from scratch, so you have to know what’s in there. It still sucks to have full copies of the Linux userland lying around, but there are advantages.
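
For comparison, building an LXC container from a stock template goes something like this (names and release are placeholders, assuming an LXC 1.0-era toolchain):

    # Build the rootfs yourself from a known template, so there are no surprises.
    lxc-create -n web01 -t download -- -d debian -r wheezy -a amd64
    lxc-start -n web01 -d
    lxc-attach -n web01 -- dpkg -l | wc -l   # see exactly how much userland you're carrying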

Containers as Machines

The problem, as I see it, is not whether containers are useful. The real problem is managing them sanely, and a side problem I’ve been concerned with for the past couple of months is that provisioning and orchestration tools for containers are still a fair bit behind what you have for VMs/hypervisors. Most people right now are fairly used to Puppet, Chef, Ansible and whatnot (having spent the last 3-4 years getting to grips with them), and machines (virtual, contained or physical) are, by and large, fairly familiar beasts, with well-known internal locations for configuration scripts, data files and packages, as well as reasonably uniform system services.

If you’re already managing a mix of physical and virtual servers, simply tossing in Docker containers adds a lot of entropy to the mix, but using vanilla LXC lets you re-use the vast majority of your tools.

An LXC container host can get pretty crowded with largely redundant init, sshd and other housekeeping processes in each container, but (provided you’re running them on a back-end network, of course) you can simply fiddle with the network bridge to make your containers visible on the LAN, fool your existing tools into accepting each container as a standalone machine and not lose much sleep over it – up to a point, since there are trade-offs in terms of IP addressing, security and complexity (but networking performance tends to be pretty good).
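
For reference, bridging a container onto the LAN usually amounts to a few lines in its config, assuming the host already has a br0 bridge defined (names and the MAC template below are illustrative):

    # /var/lib/lxc/web01/config (excerpt) -- put the container on the host's
    # existing bridge so it looks like just another machine on the LAN.
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:xx:xx:xx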

The downside, of course, is that LXC cannot cope with dynamic workloads, and instantiating new containers requires proper planning. Again, that might not be much of an issue if you’re running in-office/back-end services, but its predictability (and reasonably well-defined ways to set resource limits via lxc-cgroup) lends itself well to such things.
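
Setting those limits is about as simple as it gets – something along these lines, with the container name and values purely as examples:

    # Cap memory and pin the container to two cores at runtime...
    lxc-cgroup -n web01 memory.limit_in_bytes 512M
    lxc-cgroup -n web01 cpuset.cpus 0,1
    # ...or make it permanent in the container's config:
    #   lxc.cgroup.memory.limit_in_bytes = 512M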

The Shell Game

Docker is a lot better suited for ephemeral containers, and (most importantly) for reproducible ones that you can clone at will.

I’ve tried running LXC atop btrfs to duplicate some of that snapshotting functionality. Filesystem vagaries aside, trivial cloning isn’t that hard to replicate, but the simplicity with which Docker lets you get things up and running (yes, even for pretty demos) is an eye opener.
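
The btrfs experiment boils down to something like this (container names are examples, and it assumes your LXC build has btrfs backing store support):

    # With the rootfs on btrfs, clones are copy-on-write snapshots
    # instead of full copies of the userland.
    lxc-create -n base -B btrfs -t download -- -d debian -r wheezy -a amd64
    lxc-clone -s base web02    # near-instant snapshot clone
    lxc-start -n web02 -d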

Isolation is also slightly better enforced (given that a Docker container is only supposed to run a single binary), and it’s usually less painful (and less troublesome) to restart a service container than a machine container.

The trouble with Docker, as far as I’m concerned, isn’t the lack of a “complete”, init-managed machine-like environment or having things run as root (which, by the way, is why you have the USER directive in Dockerfiles – use it!).
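
In case it isn’t obvious, dropping privileges takes all of two lines in a Dockerfile (the account and binary names here are made up):

    # Dockerfile fragment -- create an unprivileged account and switch to it,
    # so the service doesn't run as root inside the container.
    RUN useradd -r -s /bin/false appuser
    USER appuser
    CMD ["/usr/local/bin/myservice"]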

The thing that annoys me repeatedly is managing data persistence and connectivity. Docker volumes make it easier these days, but it can be a pain to (for instance) tweak things like PostgreSQL to get their data from different locations3 and toss a boatload of mountpoints and ports in front of a docker run command.
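
To give you an idea, this is roughly what that looks like for PostgreSQL – paths, port and image tag are just examples, and it assumes the image honours a PGDATA override:

    # The sort of invocation that gets unwieldy fast: a host-mounted data
    # directory, a relocated data path and a published port.
    docker run -d --name pg \
        -v /srv/pg:/var/lib/postgresql/data \
        -e PGDATA=/var/lib/postgresql/data/pgdata \
        -p 5432:5432 \
        postgres:9.3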

Some aspects of the internals aren’t much nicer. Filesystem redirection (either via AUFS or data volumes) can take its toll on performance, not to mention that it’s all too easy to end up with a host crammed full of unused AUFS layers if you (again) overuse third-party images.
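
Housekeeping for that is still a manual affair – something like the following, assuming a Docker version recent enough to support these filters:

    # Clear out exited containers and dangling (untagged) image layers by hand.
    docker ps -a -q -f status=exited | xargs -r docker rm
    docker images -q -f dangling=true | xargs -r docker rmi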

More worryingly, networking (especially across different hosts) tends to be something of an exercise in knitting ports together, which makes it hard to manage. I know there are a bunch of very smart people working on that, but so far I’m not too impressed by some of the inter-host networking solutions I’ve seen, given that they essentially employ userspace tunnelling and routing to build overlay networks – again, with considerable overhead in some cases.
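
The manual, port-knitting alternative isn’t pretty either – essentially publishing ports on one host and feeding the addresses to containers on another (addresses, names and the worker image below are placeholders):

    # On host A (10.0.0.11): expose the service on a well-known port.
    docker run -d --name redis -p 6379:6379 redis
    # On host B: tell the consumer where to find it, by hand.
    docker run -d --name worker -e REDIS_HOST=10.0.0.11 -e REDIS_PORT=6379 myworker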

Things have improved markedly in the past year, but any sysadmin will tell you they like their technologies to be stable and predictable.

Even if workloads and traffic surges aren’t, it’s always good to know that unleashing a dozen fresh containers to deal with peak traffic won’t drag your IOPS down to the point where any performance benefits you might have had go right out the window – so, again, build your own containers and test them under load.
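
Nothing fancy is required, either – even a quick ab run against a freshly built image tells you a lot (image name, port and URL are examples):

    # Spin up the image you just built and hammer it a bit before trusting it.
    docker run -d --name web-test -p 8080:80 myapp:1.0
    ab -n 10000 -c 100 http://127.0.0.1:8080/   # or whatever load generator you prefer
    docker rm -f web-test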

DevOps Nirvana

So far this has mostly been a sysadmin’s take on things. But if I put on my developer hat (which may or may not have “devops” scrawled on it with a crayon), things look amazing.

The big benefit for developers is complete reproducibility of their environment. That, thanks to container snapshots, includes airtight dependency management to a degree hitherto unseen, even if you (like most sane shops) insist on deploying stuff via a package manager to ensure uniformity.

Thinking of Docker containers as application images with bundled dependencies is the first step.

Realizing that you can rebuild them hundreds of times a day and deploy them very quickly is the next, although, like many, I wish the Dockerfile format weren’t so limited (I’d prefer something closer to a Makefile in terms of variable expansion, for starters).
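
The usual workaround is to do the variable expansion one level up – a trivial Makefile wrapper along these lines (names and tagging scheme are illustrative, and recipe lines need tabs):

    NAME    := myapp
    VERSION := $(shell git describe --tags --always)

    build:
    	docker build -t $(NAME):$(VERSION) .

    run: build
    	docker run --rm -it $(NAME):$(VERSION)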

From then on, splitting your application into multiple independent, individually testable and reusable services, each in its own container, is just gravy – all of a sudden, you gain the ability to selectively upgrade components (even fairly complex ones) on the fly.
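
In day-to-day terms that’s as simple as swapping out one container while the rest of the stack keeps running (names, tags and the registry below are examples):

    # Roll just the API service forward, leaving everything else untouched.
    docker pull registry.internal:5000/api:1.3
    docker stop api && docker rm api
    docker run -d --name api -p 8000:8000 registry.internal:5000/api:1.3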

The thing is, re-architecting any pre-existing application to work that way usually means a lot of work. But developing new ones can become quite fun, even if componentization (and containerization) isn’t a magic bullet – in fact, I tend to think that its eminent suitability for online services has blinded a lot of otherwise clear-minded folk into believing it’s useful for everything – but I digress.

The Simple Enterprisey Things

So I’ve been looking into Docker deployment solutions, of which there are entirely too many at this point.

That means looking for two completely different things: a full-featured solution that can be deployed wholesale as part of a PaaS4, and a no-frills approach for my own use.

Each has its own challenges – the former has to be enterprisey enough (and popular enough) for me not to spend weeks avoiding sharp corners, whereas the latter needs to be something simple and straightforward enough for me to run in an ad-hoc fashion.

Up until last week Deis seemed to be the one to aim for in the long run, but I’m not exactly pleased with their dependency on CoreOS, and after the latest round of announcements in that camp I’m going to wait until the dust settles5 – there’s no point in building stuff atop quicksand.

So I’ve started taking a solid look at Kubernetes (seeing that they’re a bit further along than the Docker folk themselves, at least for the moment), and I’m quite curious as to what the Mesosphere guys are up to.

My Pocket PaaS

Picking my own solution is turning out to be a bit harder: Shipyard’s an old favorite of mine, but I wanted something between Octohost and Dokku.

In particular, I wish either of them used varnish as a front-end instead of nginx, and I’m not very keen on the way I have to deploy things – I personally find Heroku-like buildpacks a nuisance, and even though they “just worked” in a Cloud Foundry setup I had to try out recently, they’re an extra “fat” dependency to debug if things go horribly wrong.

So, true to form, I’m currently using Fig for development and plain Docker (together with a few Makefiles and a private registry) for deployment. Using a private registry is still a major pain (you have to tag each image with part of the registry URL, of all things), but it can be automated away somewhat.
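
For the record, the tagging dance goes something like this (registry host and image name are examples), and it’s easily buried inside a make target:

    # An image has to be re-tagged with the registry's address before pushing.
    docker build -t myapp .
    docker tag myapp registry.internal:5000/myapp:latest
    docker push registry.internal:5000/myapp:latest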

But actually developing is turning out great. Fig has some extremely nice touches; for instance, it sets up environment variables and hostnames for me, making it a lot less time-consuming to build multi-service setups than wiring them up by hand. Given my (very) limited time for hobby coding these days, it’s been well worth the switch.
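
A minimal fig.yml for the kind of setup I mean looks something like this (service names and images are examples); running fig up builds and starts both containers, wiring DB_* environment variables and a db hostname into the web container:

    # fig.yml
    web:
      build: .
      ports:
        - "8000:8000"
      links:
        - db
    db:
      image: postgres:9.3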

And it will underpin (or at least influence) Docker’s own compose tool, so it seems like a safe enough investment.

Of course, the fun part of this is that it will all change in a month or so. Best laid plans…


  1. Some may find these simplistic, but I like my arguments (and problems) simple and manageable. ↩︎

  2. Even if AUFS (like btrfs) helps you save on disk space, the attack surface is still there. So I’ve been toying with the notion of grabbing something like the busybox base image or the Tiny Core Linux rootfs (which is only 3MB) and building a set of minimal container runtimes that have absolutely nothing else installed, but laziness and the wealth of pre-built packages available in mainstream distributions have so far prevented me from doing so. ↩︎

  3. PostgreSQL is actually fairly easy, provided you bite the bullet and put everything (configuration and data) in the same volume. But I hate that. ↩︎

  4. But please, let it not be OpenStack. I like OpenStack’s Nova, but it comes with too much VM-oriented baggage and the whole thing has turned out so big and enterprisey it’s become unwieldy for anything but enterprise deployments. ↩︎

  5. I’d much rather stick to things that can be deployed atop CentOS or another mainstream distribution, given that they’re supported on pretty much every private or public infrastructure provider out there and there are plenty of things one needs to run on a host system that are readily available on normal distributions. ↩︎