Some Breakage May Ensue


I’ve been meaning to redesign the site and move it to new infrastructure for a long while now – well over a year, in fact – and finally decided to go about it in earnest.

As usual, this is happening amidst a kind of perfect storm: I have a bunch of other projects I need to work on, one of my kids has been sick all weekend, my legendary allergies are kicking in again and today’s a holiday, so it was now… or never.

The Looks

I decided I liked the plain, unfettered serif look well enough to keep, at least for now. There are some Medium-y things happening with the design (because I actually like the Medium UX), but the site is now designed to look better on tablets first – partly because I practically live inside an iPad these days, and partly because that’s been the fastest growing segment in Google Analytics for the past couple of years.

Rather than completely roll my own design from scratch, I picked up Lanyon and went to town on it. There’s still a fair amount of CSS optimization to be done, but it works.

I decided to buck a lot of trends, at least for now: no fancy fonts (Georgia does the job more than adequately), no JavaScript unless it’s absolutely necessary, no fancy images (at least for now – I fully intend to reverse direction on that topic), and no static HTML.

The Process

The big change is how I deploy the site – everything’s now just a git push away. That may be the norm in 2016, but the previous codebase lasted me for eight years, and way back then we only had ones and zeros. Or blunt instruments, or something.

I still use Dropbox to manage content (pointing any iPad editor to it is trivial these days), but what was getting to me was how hard it was to roll out new code, layouts, etc. After all, I did that sort of thing all the time for work, so why couldn’t I do it better for my own stuff?

I originally intended to use Docker for everything and had a staging server where I was using dokku to manage deployments, but after nearly a year of futzing with it I was spending more time cleaning out unused containers and upgrading various bits than actually running stuff.

Docker is essential when you’re deploying at scale, but for a single, relatively low power box it’s just plain overkill – and I’ve yet to be able to run it without any hitches on ARM hardware.

So, as usual, I built something smaller – I took dokku, reimplemented only the bits I needed in way less than a thousand lines of Python (800 or so right now), and piku was born.

As far as having your own mini-Heroku goes, piku is the simplest thing that could possibly work, and in the venerable tradition of my developing everything on low-end boxes first and then moving up in the world, it was developed and field tested on a 512MB Raspberry Pi model B (the ancient, pokey kind).

The workflow is exactly the same as dokku's but without the container build times, so that was an instant win.

The Tech

Sushy has been “nearly ready” for a few months now, which essentially meant that there were a bunch of things that hadn’t been battle-tested yet. Rather than fixing absolutely everything and putting off migration until the cows came home, I decided to move fast and break a few things in the process.

So there are things that don’t work right now. In particular, some ancient stuff will not be completely accessible until I get around to backporting a few Yaki plugins, but most content from the past ten years should be readable.

Best of all, this now indexes instantly, and the entire codebase is so much smaller when compared to the old one that it isn’t even funny.

Deploying Sushy to piku essentially means that when I do a git push, piku creates (or updates) a Python virtualenv for it. Then uWSGI is given a set of files to manage both web and background workers (generated from a Procfile and a set of environment variables, in true 12 Factor style) and nginx gets automatically reloaded to talk to any new web processes, all thanks to the magic of inotify.

Using uWSGI to supervise background workers was the one trick that made things a lot simpler, and piku has become my de facto way of deploying Python services, whether or not they even have a web interface.
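For illustration, the Procfile and ENV file for such a deployment might look like this – the process names and values here are hypothetical, but piku follows the familiar Heroku-style conventions:

```
# Procfile – one process type per line; piku hands these to uWSGI to supervise
web: python -m sushy.web
worker: python -m sushy.indexer

# ENV – 12 Factor-style settings, exposed as environment variables
PORT=8000
DEBUG=False
```

A `git push` to the piku remote is all it takes to pick up changes to either file.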

The Hardware

In a nutshell, I decided to move all my personal stuff to DigitalOcean and phase out Linode by the end of this month.

This was mostly circumstantial, but it also reduces overhead. I have entirely too many machines running all over the place, so cutting down on the number of providers I use makes it all a bit easier. Even before I started working on Azure, most of my development stuff had progressively shifted over from Linode to DigitalOcean, leaving only this site on Linode.

I have nothing against their service, but my 2GB, multi-core machine was mostly sitting there burning cash, whereas DigitalOcean just happened to have Ubuntu 16.04 (Xenial) readily available, and a 512MB droplet is more than enough these days thanks to the wonders of CDNs – so I asked myself “why not?” and threw the switch.

Things should be increasingly stable over the next day or so as DNS propagates and I set up redirects, so kindly bear with me and feel free to tweet @taoofmac with any issues you find.

Bright Pixel is born 


One Googler’s take on managing your time 


The Curious Case of The Craving For Elbow Grease


It’s now been six months since I joined Microsoft, and it’s as good a time as any to jot down a few notes on what it’s been like so far, from a very generic (and, above all, professional) perspective.

I also slipped on wet cobblestones a couple of days ago and bruised my left elbow and most of my lower arm in quite spectacular fashion, so I’m taking advantage of that to chill out and reflect a bit.

The Good

Being part of what is known internally as Cloud and Enterprise makes for some sobering insights on how the company is changing – which is a good deal of the reason why I signed up in the first place. I missed that kind of perspective (it was the sort of industry gestalt I enjoyed daily at Vodafone), and there’s the same feeling of disconnect and incredulity when I peruse what passes for news coverage these days and attempt to match what is newsworthy with what I know is actually happening.

That’s the tech industry for you – a lot of work backstage and a lot of static and guesswork in the media, a lot of it coming from biased pundits.

On the whole, there’s a lot of soundness in Microsoft’s cloud strategy as I see it being outlined and communicated to its traditional customers. Hybrid cloud is a nice, logical first step for established businesses, and the competition can’t match Microsoft’s reach in that regard. So the business logic is sound, although I will freely admit that it may seem like it’s coming out of left field to someone who’s spent their whole life living in an Internet-centric world.

Then again, traditional IT isn’t that good a match for your typical startup environment. Well, not yet anyway…

The people I work with are, in a word, awesome, both locally and globally. Again, I enjoy the global reach: E-mailing or IMing just about anyone on any topic and getting hold of the actual people driving a product is… interesting, to say the least, and something that I find heartening considering that Portugal keeps teetering on the brink of becoming a technological backwater – there’s a whole world out there filled with educated, highly knowledgeable people, and you don’t have to starve intellectually by keeping to the local industry.

(Yes, the Web Summit is coming. No, it won’t fix our economy or our startup ecosystem as if by magic – we’ll actually have to work at it, but I’ll write about that some other time.)

The Bad

In contrast, being in a sales-driven part of the business (although my role is technical in nature and focus) makes it hard to do mid-to-long-term projects, something that is constantly eating at me.

I’m fortunate enough to have customers who get it and are leveraging cloud solutions in earnest, and there are plenty of stimulating discussions and workshops to go around, but the enjoyment I derive from doing architecture designs and proof of concept demos/prototypes etc. is not the same as actually hunkering down and building stuff while leading a team.

I’m still trying to adjust, but not actually working on projects in a continued fashion is seriously getting to me – I’m too used to obsessing over a challenge over several days from breakfast to supper (and often the other way around), and there are just too many context switches for me to feel productive.

The Ugly

I get to move around a lot (or, at least, a lot more than I used to). That would ordinarily be a good thing (and I never get bored of going places or meeting new people), but despite flexible hours and being able to work on the go, the overall feeling is of wasted time, for two reasons (one of which is largely psychological):

  1. No matter how much I enjoy doing talks and presentations, it just doesn’t feel like actual work (much like when you start leading teams and doing status meetings)
  2. The commute to and from the office takes me an hour or so each way plus anywhere between half and another hour to go from there to a customer

…and those completely destroy any sort of serious productivity (there’s just no way you can keep “in the zone” unless you lock yourself away for a whole day).

Also, in practice, meal times are a bit random and all the extra walking I was supposed to be doing just isn’t happening (or at least isn’t happening as consistently as necessary), so I’m gaining weight again.

Mixed Blessings

Fortunately, I can work from home (or, in fact, from anywhere) – which given my current location means I can hop over to do a presentation to a customer in only half an hour, possibly sticking around for a couple of hours to catch up on e-mail, tying up all sorts of loose ends (maybe even doing some impromptu whiteboard sessions) and be on my way to the next meeting.

This, of course, means it’s also harder to separate personal from work time – I do all the usual tricks like stepping into my home office at 9 and leaving it by dinner time as well as not using my work laptop on weekends or evenings unless it’s absolutely necessary, but I always end up reviewing docs and doing deep dives on tech stuff during the evening (//build being a particularly good example – I’m still halfway through the sessions, at best).

It’s either that or losing touch with the status quo of the umpteen technologies I have to deal with – something that is very hard to get across when most “normal” people get by with a couple of summary slide decks.

Oddities

I decided to get into modern C# and ASP.NET, and am constantly amazed at how much the average .NET dev relies on the IDE for everything. Coming from a world where scripting, automated testing and (above all) thinking before doing stuff makes you much less reliant on graphical tooling, I’m constantly coming across weird corner cases and inconsistencies that the IDEs simply smooth over – and the same applies to database management tools and suchlike.

On the other hand, that deeply ingrained habit of not relying on graphical tools has already saved my bacon a couple of dozen times, so I’m (almost) OK with using a CLI on Windows, even if it entails using PowerShell – which is less of an issue for me every day1 thanks to the increasing emphasis on things like the Windows Subsystem for Linux (which is getting better and better) and my having taken up the challenge of being one of the local Open Source Champions, but that’s a story for another time.

The Gear

Most of my work (and play) still revolves around terminal windows and browsers, which work well mostly anywhere.

Still, at this point I would gladly swap out my ThinkPad for something smaller, since everything else I use is vastly easier to carry – all of my personal coding is done on a 12” MacBook and all of my writing is still done on an iPad mini (although I’ve shied away from OneNote recently due to an irritating bug in the iOS version).

But for work, the only relevant change is that I have temporarily ditched the company-issue Lumia 640 XL in favor of my ancient HTC One, because the One is a kick-ass LTE modem and I actually need that a lot more on the go than the Lumia’s camera.

After all, you can’t have everything, right?

(Or so I keep telling myself)


  1. Incidentally, if you want to understand why PowerShell works the way it does and where Windows Server is going, I heartily recommend watching this presentation by Jeffrey Snover. I still don’t like the syntax or the aggravating inconsistencies between different sets of cmdlets, but I have a lot more respect for it these days. ↩︎

Dave Cutler’s five-decade-long quest for quality 


Why Microsoft needed to make Windows run Linux software 


The iPhone SE 


More Chinese Mobile UI Trends 


Xamarin for Everyone 


Ubuntu on Windows - The Ubuntu Userspace for Windows Developers 


Python, Machine Learning, and Language Wars 


Micro: A microservice toolkit 


iPad Pro 


Setting Up Anaconda Python WSGI Apps On IIS


After unconsciously avoiding IIS 8.5 for nearly six months at Microsoft, I had to deploy a few simple Python apps on Windows Server. Nothing against it, really, but I’m much more focused on microservices these days, and as such the only Windows web server I play with is Kestrel, given that it’s the future for ASP.NET.

But since I had to use IIS, I went about it in my usual pragmatic fashion – i.e., made absolutely sure it would be as painless as possible in the future. In the process, I also partially duplicated the setup used by Azure App Service Web Apps, since I wanted to be able to deploy on both with minimal changes.

My Setup

Most of the work I do on Azure these days involves Big Data or data science of some sort (usually with a machine learning twist), and as such I tend to use the Data Science Virtual Machine image.

That image has a bunch of essential goodies installed, beginning with Continuum Analytics’ Anaconda Python distribution, the latest R Server, and other staple Microsoft tools like Visual Studio and SQL Server Express – in short, all you need to start munging data at your leisure, including Jupyter and Power BI for trying out scripts interactively and making sense of data during the whole project.

I’ve taken to installing the R kernel into Jupyter, so I can do just about everything using notebooks and share the results easily – but that’s a topic for another post.

The Problem

I wanted to expose a number of REST endpoints to publish data from a model I’m developing, and wanted to test them locally using the Anaconda runtime before bundling up the lot and pushing them out to an Azure Web App.

I’d rather use Anaconda because it’s already installed, and also because I’ve had mixed results with the Python for Windows binaries from python.org over the years – nothing much, really, but installing another interpreter seemed like an unnecessary hassle.

Azure App Service leverages the web.config file and a custom WSGI handler to make it easy to set up Python apps with minimal fuss (including activating a virtualenv to allow for custom packages), but I wanted to have as close a setup as possible using Anaconda’s Python interpreter and packages.

Execution Environment

Here’s how things work in this setup, which emulates what Azure does for Python apps:

  • web.config sets up IIS to use wfastcgi as a FastCGI request handler (with a rewrite rule to make everything go through FastCGI) and defines WSGI_HANDLER, PYTHONPATH and a number of other environment variables, including WSGI_ALT_VIRTUALENV_HANDLER
  • wfastcgi then looks up WSGI_HANDLER and loads what it points to – in this case, ptvs_virtualenv_proxy.py
  • That proxy sets up WSGI logging, boots your virtualenv and then loads up WSGI_ALT_VIRTUALENV_HANDLER, which is actually your app
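The resolution step in that last bullet boils down to splitting the dotted handler name and importing the module. A minimal Python sketch of the idea (the actual proxy, listed further down, does this with `__import__` and some extra error handling):

```python
import importlib

def resolve_handler(handler_name):
    """Turn a 'module_name.attribute' string into the named object."""
    module_name, _, attr = handler_name.rpartition('.')
    if not module_name:
        raise ValueError('expected module_name.wsgi_handler, got %r' % handler_name)
    return getattr(importlib.import_module(module_name), attr)

# e.g. resolve_handler('os.path.join') hands you back os.path.join itself
```

The same pattern is how `WSGI_ALT_VIRTUALENV_HANDLER` (set to `myapp.app` below) eventually becomes the Bottle application object.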

In this case, there’s no need for the runtime.txt file that Azure uses to determine the runtime, but it’s worth pointing out that it exists.

To make this work locally, all you need to do is install wfastcgi inside Anaconda and pick your WSGI framework (I went with Bottle, which is what I use for most stuff, but using Django instead is child’s play).

There’s a hidden kink, though: I had to install the rewrite module manually. That had me stumped for a while, since I hadn’t really used IIS since 6.0(ish).

The Solution

To set everything up from scratch on a fresh machine:

  • Set up IIS as usual, enabling CGI (to get the FastCGI adapter)
  • Install the rewrite module
  • Enable web.config overrides (I edited %windir%\System32\inetsrv\config\applicationHost.config “the hard way” using Notepad with admin privileges, but you can use the appcmd.exe CLI tool)
  • Drop my web.config, myapp.py and a tweaked version of ptvs_virtualenv_proxy.py into the default website (I actually nuked the default while experimenting, but I’m already running two different sites on the same box with this setup)
  • Change permissions on the log folder to enable SERVERNAME\IIS_IUSRS to write to it
  • Run pip install bottle wfastcgi
  • Run wfastcgi-enable as an administrator

…and bingo, the little Bottle app should just work.

Configuration Files

Here’s the web.config file. Note the logging and WSGI handler paths:

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="WSGI_HANDLER" value="ptvs_virtualenv_proxy.handler"/>
    <!-- Make sure HOSTNAME\IIS_IUSRS can write to this -->
    <add key="WSGI_LOG" value="c:\inetpub\logs\logfiles\w3svc1\wsgi.txt"/>
    <add key="PYTHONPATH" value="c:\inetpub\pyroot\default" />
    <add key="WSGI_ALT_VIRTUALENV_HANDLER" value="myapp.app" />
    <add key="WSGI_ALT_VIRTUALENV_ACTIVATE_THIS" value="c:\inetpub\pyroot\default\env\Scripts\activate_this.py" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true" />
    <handlers>
      <add name="Python FastCGI"
           path="handler.fcgi"
           verb="*"
           modules="FastCgiModule"
           scriptProcessor="c:\Anaconda\python.exe|c:\Anaconda\Lib\site-packages\wfastcgi.pyc"
           resourceType="Unspecified"
           requireAccess="Script" />
    </handlers>
    <!-- 
        this requires the rewrite module, available at http://www.iis.net/learn/extensions/url-rewrite-module/using-the-url-rewrite-module
        and tweaking C:\Windows\System32\inetsrv\config\applicationHost.config 
        
        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/handlers
        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/modules
     -->
    <rewrite>
      <rules>
        <rule name="Configure Python" stopProcessing="true">
          <match url="(.*)" ignoreCase="false" />
          <action type="Rewrite" url="handler.fcgi/{R:1}" appendQueryString="true" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

Here’s the slightly tweaked ptvs_virtualenv_proxy.py. The virtualenv bit is commented out for this minimal setup, but it’s a simple matter to re-enable.

import os
import datetime

def log(txt):
    """Logs fatal errors to a log file if the WSGI_LOG env var is defined"""
    log_file = os.environ.get('WSGI_LOG')
    if log_file:
        f = open(log_file, 'a+')
        try:
            f.write(str(datetime.datetime.now()))
            f.write(': ')
            f.write(txt)
        finally:
            f.close()

def get_wsgi_handler(handler_name):
    """Resolves a 'module_name.wsgi_handler' string into the WSGI callable"""
    if not handler_name:
        raise Exception('WSGI_ALT_VIRTUALENV_HANDLER env var must be set')

    module, _, callable = handler_name.rpartition('.')
    if not module:
        raise Exception('WSGI_ALT_VIRTUALENV_HANDLER must be set to module_name.wsgi_handler, got %s' % handler_name)

    # this proxy targets Python 2 (note unicode here and execfile below)
    if isinstance(callable, unicode):
        callable = callable.encode('ascii')

    # a trailing '()' means the named object should be called to obtain the app
    if callable.endswith('()'):
        callable = callable.rstrip('()')
        handler = getattr(__import__(module, fromlist=[callable]), callable)()
    else:
        handler = getattr(__import__(module, fromlist=[callable]), callable)

    if handler is None:
        raise Exception('WSGI_ALT_VIRTUALENV_HANDLER "' + handler_name + '" was set to None')

    return handler

# Uncomment when virtualenv is required
#activate_this = os.getenv('WSGI_ALT_VIRTUALENV_ACTIVATE_THIS')
#if activate_this is None:
#    raise Exception('WSGI_ALT_VIRTUALENV_ACTIVATE_THIS is not set')
#log('doing activation' + '\n')
#execfile(activate_this, dict(__file__=activate_this))

log('getting handler ' + os.getenv('WSGI_ALT_VIRTUALENV_HANDLER') + '\n')
handler = get_wsgi_handler(os.getenv('WSGI_ALT_VIRTUALENV_HANDLER'))
log('got handler ' + repr(handler))

…and, finally, the Bottle app:

import bottle
from bottle import route

@route('/')
def index():
    return "Hello World!"

app = bottle.app()
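Incidentally, if you want to sanity-check a WSGI callable without going anywhere near IIS, you can invoke it by hand with a synthetic environ. Here’s a standard-library-only sketch, using a trivial stand-in app (Bottle’s `bottle.app()` object is invoked exactly the same way by wfastcgi):

```python
from wsgiref.util import setup_testing_defaults

# Trivial stand-in WSGI app – same calling convention as the Bottle one
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b"Hello World!"]

def call_wsgi(app, path='/'):
    """Call a WSGI app with a synthetic request and return (status, body)."""
    environ = {'PATH_INFO': path}
    setup_testing_defaults(environ)  # fills in the remaining CGI-style keys
    captured = {}
    def start_response(status, headers):
        captured['status'] = status
        captured['headers'] = headers
    body = b''.join(app(environ, start_response))
    return captured['status'], body

# call_wsgi(app) comes back with ('200 OK', b'Hello World!')
```

This is handy for quick regression checks before fiddling with web.config again.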

To deploy the same thing on Azure, all you’ll need to do is strip out the modified web.config and virtualenv proxy, set up a git remote to your Azure Web App, and push away.

I’ll make sure to update this when I build something complex enough to warrant a virtualenv – if you’ve been keeping track, I am rather picky about how I manage dependencies, regardless of platform, but I’m quite partial to the new wheel format, and that’s probably the most sensible way to deploy packages with native bindings on Windows.

Django Channels 


ScummVM sails onto the Raspberry Pi 


Cursory reviews of random gadgets


It’s been a long while since I published any sort of gadget reviews, so here are a couple at once.

Apple Watch (Sport)

Bought it at $100 off at a Best Buy during my recent foray to the US, the reasoning being that even if a new model comes out this month, getting a fully integrated experience at that price was a no-brainer. I expect to write more about my experience with it in six months or so (maybe – there are already too many opinions on it out there), but these are the highlights so far:

Pluses:

  • I can finally reply to instant messages (including WhatsApp and any other app that supports iOS quick replies) and even e-mails – the latter is hardly useful, but the former is invaluable when you spend as much time in public transit as I do. Canned replies are context-sensitive (which is pretty great), and dictation mostly just works (including punctuation), so it’s an all-round win.
  • The most important work apps I use work perfectly with it. Outlook and OneNote are not just accessible, but actually useful. Besides having the Outlook complication1 on my watch face, focused inbox notifications let me archive or triage direct correspondence on the go, including meeting requests (of which I receive entirely too many).
  • The health/activity stuff is actually useful to me, for a variety of reasons. Working at Microsoft forces me to move around quite a lot, but not actually exercise the way I would prefer, so having something better than Google Fit to keep (moderately accurate) track of my activity is nice.
  • The watch itself fits and feels great (I went for the 38mm model, which was both smaller and cheaper, and quite like not having a big, bulky watch protruding from my shirt cuff). Physical controls (crown and side button) are a nice bonus. Not by any means revolutionary and a bit fiddly in terms of UX, but nearly as satisfactory as the side buttons on the Pebble in a purely physical sense.
  • It works fairly well over Wi-Fi - I can leave my iPhone charging on the other end of the house and still get notifications, use apps, and even take the occasional call.

Minuses:

  • After nearly a year with smart watches that had an always-on display of some kind, it’s somewhat weird to have to jiggle my wrist or tap the watch to check the time.
  • Battery life on the watch might cover a weekend, if I’m lucky (then again, at the cost of a Pebble Round, I’m getting a lot more bang for my buck). Battery life on the phone seems to take a bigger hit than with either Android Wear or the Pebble (then again, might be due to initial enthusiasm).
  • The Taptic Engine is mildly overrated. It’s failed to bring to my attention a few important notifications, so I had to boost the intensity.
  • Notification handling is… strange. Force touch to clear all is somewhat intuitive, but the amount of visual fluff and animations involved in taking stock and reacting to a notification make the whole experience feel a bit clumsy.
  • With the exception of 1Password, Google Maps and the Microsoft apps, none of the other stuff I have installed is worth writing home about. Citymapper, Moovit and a few other apps I use daily on the phone were nothing but slow, pokey disappointments.

Like other first-generation Apple gear, the watch (even running the 2.1 OS) feels unfinished and clunky. It is light-years ahead of the Pebble and a nicer experience than Android Wear, but not by much – I don’t regret getting it (again, being able to reply to messages on the go is well worth it on its own).

Logi(tech) Keys-To-Go

Another recent acquisition was a Logitech Keys-To-Go Bluetooth keyboard, which I got because for all intents and purposes my iPad mini is my go-to personal computer, and Apple saw fit to make the mini 4 ever so slightly bigger than the previous models – so my old folio keyboard cover became useless overnight.

Mind you, I wanted a US keyboard layout (for coding), so I have no clue as to what regional variants there may be.

Pluses:

  • I can touch type on it just fine. Key nubs are more than adequately spaced – it was just a matter of setting my fingers on the home keys and I was set.
  • The lack of discrete keys is hardly a problem – key travel is easily on a par with my MacBook (probably a bit better), and the velvety feel of the material is a tad more pleasant than cold plastic keys in current weather.
  • It has a Ctrl key, something that is essential to me when using any sort of terminal/Remote Desktop app.
  • The keyboard is extremely lightweight, and only a few centimeters wider than my iPad mini 4, so it’s not very awkward to carry around (I suspect it will be even less awkward if you have a full-sized iPad).
  • It exhibits very little of the usual idiocy that appears to strike iOS hardware keyboard designers - i.e., it has almost all the keys you’ll need (see below).

Minuses:

  • Logitech keeps getting special keys all wrong. Two of the keys on the top row take me to the global search field, another takes a screenshot (whatever for?) and there is no Esc key – a common flaw on iOS keyboards since time immemorial.
  • The on/off button is tiny, fiddly and, overall, an annoyance for folk with trimmed fingernails. I get that it’s necessary, but I wish I didn’t have to be a guitarist or an extra in The Devil Wears Prada to toggle it without a hitch.
  • It lacks backlighting. That would probably be a challenge with this sort of keyboard, but I’d pay a bit extra for it.
  • The material becomes visibly worn after only a week’s use – in particular, my penchant for hitting the space bar with my right thumb has already left a whitish spot on it.
  • There’s a noticeable tendency for dust to cling to it. Easy to brush off, but annoying.

On the whole, though, it was good bang for the buck, and unless you’re particularly picky about keyboard feel, more than adequate for extended use.


  1. Yeah, I know – the irony, right? ↩︎

Android N’s multi-window multitasking mode 


Announcing SQL Server on Linux 


Pebble Time smartwatches get a $50 price cut