I’ve been meaning to redesign the site and move it to new infrastructure for a long while now – well over a year, in fact – and finally decided to go about it in earnest.
As usual, this is happening amidst a kind of perfect storm: I have a bunch of other projects I need to work on, one of my kids has been sick all weekend, my legendary allergies are kicking in again and today’s a holiday, so it was now… or never.
The Looks
I decided I liked the plain, unfettered serif look well enough to keep, at least for now. There are some Medium-y things happening with the design (because I actually like the Medium UX), but the site is now designed to look better on tablets first – partly because I practically live inside an iPad these days, and partly because that’s been the fastest growing segment in Google Analytics for the past couple of years.
Rather than roll my own design from scratch, I picked up Lanyon and went to town on it. There’s still a fair amount of CSS optimization to be done, but it works.
I decided to buck a lot of trends: no fancy fonts (Georgia does the job more than adequately), no JavaScript unless it’s absolutely necessary, no fancy images (at least for now – I fully intend to reverse direction on that), and no static HTML.
The Process
The big change is how I deploy the site – everything’s now just a git push away. That may be the norm in 2016, but the previous codebase lasted me for eight years, and way back then we only had ones and zeros. Or blunt instruments, or something.
I still use Dropbox to manage content (pointing any iPad editor to it is trivial these days), but what was getting to me was how hard it was to roll out new code, layouts, etc. After all, I did that sort of thing all the time for work, so why couldn’t I do it better for my own stuff?
I originally intended to use Docker for everything and had a staging server where I was using dokku to manage deployments, but after nearly a year of futzing with it I was spending more time cleaning out unused containers and upgrading various bits than actually running stuff.
Docker is essential when you’re deploying at scale, but for a single, relatively low-power box it’s just plain overkill – and I’ve yet to be able to run it without any hitches on ARM hardware.
So, as usual, I built something smaller – I took dokku, reimplemented only the bits I needed in way less than a thousand lines of Python (800 or so right now), and piku was born.
As far as running your own mini-Heroku goes, piku is the simplest thing that could possibly work, and in the venerable tradition of my developing everything on low-end boxes first and then moving up in the world, it was developed and field tested on a 512MB Raspberry Pi Model B (the ancient, pokey kind).
The workflow is exactly the same as dokku’s but without the container build times, so that was an instant win.
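For the curious, day-to-day use looks something like the sketch below – the server hostname and app name are placeholders, and the remote syntax mirrors dokku’s:

```
# on the workstation: point a git remote at the piku user on the server
git remote add piku piku@myserver.example.com:sushy

# every deploy from then on is just a push
git push piku master
```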
The Tech
Sushy has been “nearly ready” for a few months now, which essentially meant that there were a bunch of things that hadn’t been battle-tested yet. Rather than fixing absolutely everything and putting off migration until the cows came home, I decided to move fast and break a few things in the process.
So there are things that don’t work right now. In particular, some ancient stuff will not be completely accessible until I get around to backporting a few Yaki plugins, but most content from the past ten years should be readable.
Best of all, the site now indexes instantly, and the entire codebase is so much smaller than the old one that it isn’t even funny.
Deploying Sushy to piku essentially means that when I do a git push, piku creates (or updates) a Python virtualenv for it. Then uWSGI is given a set of files to manage both web and background workers (generated from a Procfile and a set of environment variables, in true 12 Factor style), and nginx gets automatically reloaded to talk to any new web processes, all thanks to the magic of inotify.
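To make that concrete, here’s the kind of thing piku reads on each push – a minimal sketch, where the entry points and variable names are made up for illustration rather than being Sushy’s actual ones:

```
# Procfile – one process type per line (hypothetical entry points)
web: python -m sushy.web
worker: python -m sushy.indexer

# ENV – 12 Factor style configuration, injected into every process
CONTENT_PATH=/home/piku/Dropbox/space
DEBUG=0
```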
Using uWSGI to supervise background workers was the one trick that made things a lot simpler, and piku has become my de facto way of deploying Python services, whether or not they even have a web interface.
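In other words, each worker line ends up as something like an attach-daemon entry in the generated uWSGI configuration, so a single uWSGI instance babysits (and respawns) everything. The snippet below is a hand-written sketch of that idea, not piku’s literal output – the paths and module names are illustrative:

```
; sushy.ini – illustrative sketch of a generated uWSGI file
[uwsgi]
chdir = /home/piku/apps/sushy
virtualenv = /home/piku/envs/sushy
; the web worker (hypothetical WSGI entry point)
module = sushy.web:app
; the background worker from the Procfile, respawned by uWSGI if it dies
attach-daemon = python -m sushy.indexer
```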
The Hardware
In a nutshell, I decided to move all my personal stuff to DigitalOcean and phase out Linode by the end of this month.
This was mostly circumstantial, but it also reduces overhead. I have entirely too many machines running all over the place, so cutting down on the number of providers I use makes it all a bit easier. Even before I started working on Azure, most of my development stuff had progressively shifted from Linode to DigitalOcean, leaving only this site on Linode.
I have nothing against Linode’s service, but my 2GB, multi-core machine was mostly sitting there burning cash, whereas DigitalOcean just happened to have Ubuntu 16.04 (Xenial) readily available, and a 512MB droplet is more than enough these days thanks to the wonders of CDNs – so I asked myself “why not?” and threw the switch.
Things should become increasingly stable over the next day or so as DNS propagates and I set up redirects, so kindly bear with me and feel free to tweet @taoofmac with any issues you find.