Diving into Inferno

I descended into it and came back up again, somewhat enlightened but none the worse for wear.

It all started some three months ago, when I was mulling the current state of affairs regarding distributed computing.

We run Hadoop for a number of things (and the veritable zoo that comes with it), but there’s been a not-so-subtle shift in the Big Data tides towards faster, tentatively leaner (but ultimately meaner) approaches like Spark, Storm, Hazelcast, etc., so I kept asking myself what else might be out there.

This is because (let’s face it) Hadoop is a humongous beast. It’s the “industry standard” in a field that is already hyped to the size of a whole herd of industries, where there isn’t (nor will there likely ever be) a one-size-fits-all solution.

So, being me, I went back to fundamentals and started digging around for OS-level support for heterogeneous distributed computing. MPI and its kin cover the message-passing side, but they’re fairly low-level, so I went looking instead for something that provided consistent abstractions regarding the file system, runtime environment, etc.

That eventually led me to Plan 9 and Inferno1, the latter of which is (surprisingly) still being maintained. So I spent a fair bit of the past three months reading everything I could find about both, quickly gravitating toward Inferno due to a couple of neat twists it has.

The first is that it can run both natively and in a “hosted” mode (running as an app atop pretty much every modern operating system), which means you can explore its finer points without undue hassle, in an application-level sandbox.

The second twist (which just happens to make the first eminently viable) is that most of it is composed of platform-independent executables that run inside a (pretty fast) register-based virtual machine called dis.

dis hasn’t had the same degree of fine-tuning as today’s VMs, but it seems reasonably efficient, and binaries compiled on one architecture run on any Inferno environment – it’s completely transparent, and I was able to tinker merrily across a couple of ARM machines2 and a Mac.
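
If you’re curious, the loop is as unremarkable as it sounds – here’s a quick sketch from inside the Inferno shell (hello.b is a hypothetical Limbo module, so adjust names to taste):

```
limbo hello.b    # compile the Limbo source; the output is hello.dis bytecode
./hello.dis      # run it under the dis VM on this machine
```

The resulting hello.dis is the very same file whether you built it on an ARM board or on a Mac, which is precisely what makes the hosted/native split so painless.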

I won’t bore you with the details here, but the ground rules are simple, yet powerful – everything is a file, which means you can, for example, pipe the output from a shell script directly to the control file of a graphical application on another machine, which is a common trick for those using the Acme editor (more on that in a bit).

As such, it has its own (arguably better) abstractions as to what constitutes a remote session – in Inferno, you can pipe the entirety of a graphical session (which is fundamentally just an application’s equivalent of stdin/stdout/stderr) over a 9P connection to another host.
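
To make that a bit more concrete, gluing two machines together looks roughly like this from the Inferno shell – a sketch with authentication switched off (-A) and made-up hostnames, so treat it as flavour rather than a recipe:

```
# on the machine sharing its namespace: serve the root of the
# current namespace over Styx/9P
styxlisten -A 'tcp!*!styx' export /

# on the other machine: attach that namespace under /n/remote
mount -A tcp!other-host!styx /n/remote

# from then on, the remote machine's files are just files
cat /n/remote/dev/sysname
echo 'hello over 9P' > /n/remote/tmp/note
```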

And, of course, there’s a fundamental break with deeply ingrained paradigms:

  • Support for “normal” terminals is non-existent, as are most of the usual CLI niceties like navigating your command history with the cursor keys. I had to use rlwrap to get readline support and run Inferno’s emu that way (see the one-liner just after this list).
  • Editing files without a graphical environment is nigh on impossible (I ended up editing them outside the Inferno VM).
  • Editing them inside a graphical environment is a challenge – there’s Acme (which is worth trying, to a point) and a simple “normal” editor, but, out of the box, nothing else you can use for development (there is a vi port, but it’s neither here nor there).
  • The graphical environment uses Tk, and is, erm… aesthetically challenged – usable, but utterly, utterly hideous. Ultimately, a throwback to the Dark Ages of Motif.
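
(For the record, the workaround from the first bullet is nothing fancier than wrapping the emulator on the host side:)

```
# host shell, not Inferno: wrap emu in readline so that command
# history and cursor keys behave in the console
rlwrap emu
```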

All of these are reasonable caveats considering that the intent was to build a better, more consistent UNIX, but, in practice, they make it hard to get to grips with Inferno after decades of UNIX variants.

Toss in the lack of a modern browser (or an easy way to use one alongside it) or common language runtimes (there’s a little Scheme interpreter I found interesting, but nothing that generates dis bytecode), and doing anything useful (or even familiar) with the system is an uphill struggle at first.

Not because it’s hard to understand (I read through the complete set of docs and was enthralled at the simplicity of the internals as well as the cleanliness of the overall design), but because it lacks all the amenities that came with the last decade of operating systems.

Take, for example, the (arguably) simplest of tools, the text editor. I was particularly impressed with Acme (which a number of people swear by), not because of its feature set (it prides itself on having as few features as possible, actually), but due to the way you can do literally everything with it, somewhat like the ancient MPW shell that was my first introduction to development.

It’s hard to describe, really, you just have to try it – and stumble painfully onto the way Acme relies on you having a three-button mouse to do, well, just about anything. It’s mesmerizing to watch, even (I daresay) intuitive, but almost impossible to use on a modern laptop3 without dollops of willpower.

Mind you, I used a three-button mouse for a long time. In fact, I used the Hawley wheel mice, and you’ll be hard-pressed to beat those (and the UIs that I had to use them against – against being the operative word here). But it’s just not feasible to keep doing that anymore.

So no, the Inferno GUI hasn’t aged well in this age of touch screens (and touchpads). Like other remnants or ports of Plan 9 to modern systems, it needs a major facelift to be appealing outside its niche.

But if you dig deep enough, there’s a surprising (but still niche) amount of development going on around Inferno. For instance, I came across a proof-of-concept interpreter in dis that was part of Google’s Summer of Code 2013, and the same guy who did that also built a distributed filesystem layer.

Furthermore, searching for styx or 9P yields an amazing number of implementations of the core network protocol in various languages (like, say, this one), which is interesting enough on its own.

And the more I dug around, the more interesting stuff I found about Inferno and Plan 9.

For instance, if you want to understand where Go came from (to a degree), you should check out Limbo in depth – which I did for a while, poring over the Inferno Programming with Limbo book and the nice (if by now somewhat stale) Inferno Programmer’s Notebook. Not your usual weekend reading, I know, but extremely educational.

Did I ever find a feasible solution for lightweight, distributed heterogeneous computing? Well, not really. Inferno is tantalizingly close to what I wanted in terms of design and architecture, but it’s not all there. It’s stupendously impressive to be able to do map/reduce across a number of machines with little more than its shell scripting language (plenty of hints abound), but it doesn’t tie into any of the current development/platform ecosystems, so it’s hard to justify using it despite its beautiful design4.
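
To give you an idea of what those hints boil down to, here’s the rough shape of it – a sketch rather than a job scheduler, with made-up node names and data paths, and assuming each node runs the rstyx service so that rcmd can start commands on it:

```
# "map": run the counting step on each node, next to its shard of the data
rcmd -A tcp!node1 wc -w /data/shard1 > /tmp/part1
rcmd -A tcp!node2 wc -w /data/shard2 > /tmp/part2

# "reduce": merge the partial results locally (a real pipeline would
# run the map steps in parallel and do a proper merge here)
cat /tmp/part1 /tmp/part2
```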

But reading about it and trying it will, like most well-designed things, expand your mind – which is always a good thing.


  1. There’s also, interestingly enough, a Python package called Inferno that runs atop the Disco Map-Reduce framework – I’m irrationally fond of Disco and would deploy it instead of Hadoop if people weren’t already brainwashed into accepting Hadoop as the One True Solution, but that’s another matter entirely. ↩︎

  2. Serendipitously, this weekend Lynxlabs released a beta image of Inferno for the Raspberry Pi, which (sort of) works already and which I’ve been trying out. I don’t expect it to replace my current hosted installs on my little cluster just yet, but it’s worth keeping an eye on. ↩︎

  3. Incidentally, the Mac version I used is a full-blown Inferno environment that just happens to boot straight into Acme, so you get the whole thing neatly bundled in, with all the shell niceties and most of the tools. ↩︎

  4. I’m currently experimenting with Hazelcast, which has a few missing pieces as far as “normal” job tracking is concerned but is considerable fun to tinker with, even on relatively underpowered hardware. ↩︎
