Fee-fi-fo-fum, I smell FaaS

I’ve been fighting against my usual reticence towards using NodeJS by exploring the intersection between “serverless” (you might remember my earlier posts on that topic) and my legendary lack of time. Since most of the emerging indie FaaS runtimes don’t support much of anything besides JavaScript (at least not in a relatively painless way), I grudgingly started bringing myself up to date on server-side JavaScript and working through the various idioms, and have achieved an uneasy peace with ECMAScript 7 (which, alas, I can’t really use with many libraries yet).

But as I started playing around with a couple of those FaaS runtimes, I quickly realized that they just don’t have the feature set required to build anything I’d call production-ready. By that I mean flexible authentication support, a stable workflow, working monitoring, run-time debugging, and even basic security measures like being able to host their UI separately from the engine itself (which is the hallmark of the “there’s no place like 127.0.0.1” crowd). So I started using Azure Functions and figuring out how to get it to do what I wanted.

Fast forward a month and a half, and I’ve probably built around twenty microservices atop Azure Functions (which is not bad considering the amount of time I don’t have for project work these days). Halfway through I even had to put together a short tutorial on them, out of which came a little test project that I now use as scaffolding, and which has already proven itself useful: we used it to kickstart a PoC.

Update: I’ve since gone and built an Azure Resource Manager template to deploy my typical setup for this kind of thing (which includes a small Redis instance for sharing state among functions and Application Insights metrics), so you can get up to speed quite quickly if you combine both projects.

Besides Azure Functions, I’m also playing around with AWS Lambda and Google Cloud Functions¹, partly to keep abreast of competitive updates and partly because they’re both worth getting to grips with. I’m especially partial to the latter because it’s a closer match to what I have in Azure, but regardless of which one you choose, the overall workflow is pretty much the same:

  • Provision the service (define performance characteristics, endpoint, runtimes, etc.);
  • Set up deployment (usually via git, although AWS still makes it unusually hard for folk to get started);
  • Add dependencies by dropping a manifest/config of some kind alongside your package.json and your index.js;
  • Write code (see the sketch below), deploy, rinse, repeat.
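To make that last step concrete, here is roughly what a bare-bones HTTP-triggered index.js looks like in the Azure Functions Node.js model (the response shape and greeting are mine, purely for illustration; the trigger itself is declared separately in function.json, and dependencies go in package.json next to it):

    // index.js -- minimal HTTP-triggered function (classic callback-style Node model)
    module.exports = function (context, req) {
        // query string and body are populated by the HTTP trigger binding
        var name = (req.query && req.query.name) || (req.body && req.body.name) || "world";

        context.res = {
            status: 200,
            headers: { "Content-Type": "application/json" },
            body: { message: "Hello, " + name }
        };
        // signal completion explicitly (no async/await in this runtime generation)
        context.done();
    };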

Seems like a pretty effortless way to get something running, right?

Well, to begin with, sure. But once you wander out of Hello World territory, things get complex pretty quickly.

Stringing Functions Together

Regardless of NodeJS peculiarities, the reality is that a single function, on its own, is borderline useless. No matter how much logic it contains (and you certainly don’t want to cram too much into it, since there are time and resource constraints), you invariably end up needing more than just one, and having them co-ordinate to achieve something substantial.

Furthermore, if you’re designing re-usable services, there will be a fair amount of harmonization (yes, that means boilerplate) and much toing and froing around return values, atomicity and state management (which becomes all the more important when you start thinking about scaling out and executing many tasks in parallel).
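To give an idea of the sort of boilerplate I mean, here is a minimal sketch of one pattern I keep reusing: parking a bit of shared state in Redis so parallel invocations don’t step on each other. The key names and the REDIS_* settings are made up for this example, and the real thing needs expiry policies and better error handling:

    // sketch: use an atomic SETNX in Redis to claim an item before processing it
    var redis = require("redis");

    // connection details come in via app settings (these names are hypothetical)
    var client = redis.createClient(6380, process.env.REDIS_HOST, {
        auth_pass: process.env.REDIS_KEY,
        tls: { servername: process.env.REDIS_HOST }
    });

    module.exports = function (context, item) {
        client.setnx("seen:" + item.id, String(Date.now()), function (err, claimed) {
            if (err) {
                return context.done(err);
            }
            if (!claimed) {
                // another instance got here first, so just bail out
                context.log("Skipping", item.id);
                return context.done();
            }
            // ...actual processing would go here...
            context.done();
        });
    };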

So the sane way to go about things is to take a page from the domain-driven design books and build multiple small functions that each do one thing well and communicate via message passing.

In Azure, I’m rather partial to using Storage Queues for communicating between functions because they’re very simple (the bindings are all simple, actually, but the queues themselves are the simplest thing that could possibly work). You’re effectively capped at around 2K messages/second (depending on the storage account you’re using), but that is plenty for small apps and (this is a point that lots of people tend to ignore) it’s still slow enough to debug, and you can inspect storage queues directly with the storage explorer.
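As a concrete (if simplified) example, a queue-triggered function that does its one small job and hands the result to the next function in the chain looks something like this; the binding names (and the message shape) live in the function’s configuration and are purely illustrative:

    // sketch: consume a message from one Storage Queue, emit a message to another
    module.exports = function (context, feedItem) {
        // the queue trigger hands us the de-serialized message
        context.log("Processing", feedItem.url);

        // do one thing well here, then pass the baton via the output queue binding
        context.bindings.outputQueue = {
            url: feedItem.url,
            fetchedAt: new Date().toISOString()
        };
        context.done();
    };

Each hop is just a message, which is also what makes the whole pipeline inspectable with the storage explorer.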

Still, that is more than enough for me to fetch, process and classify (using the Cognitive Services APIs) a few thousand RSS feed items in under a minute, and at effectively zero cost (at the expense of speed). I’m not yet happy with the results (largely because I’ve fallen victim to NodeJS’s callback hell and am having trouble with some of the logic I need to implement), but I’m very happy with the platform for this kind of thing.
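For what it’s worth, the way I’m trying to keep the callback pyramid under control is to wrap each step in a Promise and chain them. classifyItem below is just a stand-in for the actual Cognitive Services call (so the example stays self-contained), and it assumes the incoming queue message is a JSON array of items:

    // sketch: Promise-based chaining instead of nested callbacks
    function classifyItem(item) {
        return new Promise(function (resolve) {
            // the real version would call the classification endpoint over HTTPS;
            // this stub merely tags the item
            resolve({ title: item.title, category: "uncategorized" });
        });
    }

    module.exports = function (context, feedItems) {
        Promise.all(feedItems.map(classifyItem))
            .then(function (classified) {
                context.bindings.outputQueue = classified;
                context.done();
            })
            .catch(function (err) {
                context.done(err);
            });
    };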

Also, all I needed was a text editor and git – no IDE or Windows machine required.

Caveats

There are a few things I don’t like about the FaaS approach (well, besides the overwhelming mindshare NodeJS appears to have as its lingua franca), namely the difficulty of doing long-running processes (some things do need warm-up time, or benefit from handling multiple requests in context, or simply take too long), and the lack of a real console for debugging things (Azure Functions has an in-browser console, but I like my debugging sessions inside a terminal and with infinite scrollback).

Another thing is that you really have to debug against live cloud services. Emulators are fine if your functions are self-contained, but in practice nothing is self-contained, and you really need to set up a (scaled-down) duplicate of the architecture that supports your functions (i.e., queues, Redis, databases, etc.). Most services have free tiers that are plenty good enough, but sometimes functionality depends on tier…
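In practice, that mostly means keeping every backing service behind an app setting, so the same code can point either at the scaled-down copies you debug against or at the real thing. In the sketch below, AzureWebJobsStorage is the platform’s own storage setting; the other two names are mine, not anything the platform mandates:

    // sketch: resolve all backing services from the environment, never hard-code them
    var config = {
        storage: process.env.AzureWebJobsStorage,        // where the queues live
        redisHost: process.env.REDIS_HOST,
        cognitiveKey: process.env.COGNITIVE_SERVICES_KEY
    };

    module.exports = function (context, item) {
        if (!config.redisHost || !config.cognitiveKey) {
            return context.done(new Error("missing backing service configuration"));
        }
        context.log("Using Redis at", config.redisHost);
        // ...work against whichever tier those settings point at...
        context.done();
    };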

Also, you usually need to drop an API management layer in front of anything you want to take into production. Azure Functions gives you enough basic functionality to ensure you can have sane endpoint schemas right off the bat, which is another plus, but ultimately (and especially if you’re exposing those endpoints to internal or external customers) you need something to do the API equivalent of adult supervision.

But overall, I like the fine granularity it provides in terms of both resource allocation and functional scoping (no pun intended). It might make it harder to estimate costs when you’re doing capacity planning, but it is so cheap for some things that it’s a no-brainer.

And it’s so much fun that I’ve already been trying my hand at building another indie FaaS thing – but one that runs my language of choice, of course.


  1. I re-activated my accounts on both a while back, initially to gauge how far Azure had progressed over the past year in general, and now to compare them firsthand across a few selected services. It’s always fun checking out different solutions for the same problems, and cloud is currently the biggest challenge out there…