Sitting At The Crossroads Of Computing Paradigms

Somewhat apropos of recent news, I decided to put forth some of what I’ve been mulling over regarding current computing paradigms, since I’ve been pondering them for a while now and I’m not too fond of where we’re headed.

I haven’t been very keen on modern UX trends and application ecosystems for a while now, and this post has been sitting in my drafts folder for a few months. It all started innocently enough–around two years ago, in fact, as I reflected on interaction paradigms.

Despite having made the iPad my primary personal computer, I often find touch-based interfaces annoyingly limited and cumbersome, since they still require too much effort to, say, manipulate text (undo on iOS, anyone?), edit tabular data, or even drag things about with precision–and a lot of that translates across to web interfaces.

But it’s not as simple as that (it never is), since today computers are whole systems that we can hold in our hands, and we’ve as yet been unable to evolve the UX to a point where it’s consistent across form factors, and (even worse) where the technology is actually fit for purpose.

From Cut and Paste to FrankenUX

Part of my dissatisfaction with mobile UX is certainly due to the mongrel interface conventions that were brought over from desktop environments. In that transition from desktop to mobile, cut and paste was the most notorious offender from the outset, if only because it had been in popular demand since the earliest days of the iPhone.

And yet, it’s not really suited to touch interfaces–it’s as much a hassle as a boon to do repeatedly, and although automatic text selection has become smarter, it’s still inconsistent and (mostly) limited to text–quite often only plaintext at that.

The situation has improved somewhat in iOS with better rich text support in native controls, but cut and paste still works mostly the same, and it remains fundamentally broken in Android–in some versions, the standard behavior placed an action bar at the top of the screen, forcing you to reach for something too far away from the selection, and it still doesn’t have the same caret positioning UX that iOS has, so pasting text exactly where you want it is a frustrating exercise, to say the least.

This is quite similar to what happened with web interfaces at the beginning. That was more jarring because, after all, web interfaces landed on desktops, and on desktops you were soon acutely aware that the web browser was an unfriendly place to edit things: you could only paste un-styled text into standard controls, and even with contentEditable, things were enough of a mess.
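
To give an idea of the contortions involved, here is a minimal sketch of the classic workaround (the selector and structure are illustrative, not from any particular codebase): intercept the paste event yourself, discard the styled payload, and splice plain text in at the caret.

```typescript
// A sketch of the classic contentEditable paste workaround: intercept the
// event, discard the styled payload, and insert plain text at the caret.
// The selector is illustrative; real editors need far more special-casing.
const editor = document.querySelector<HTMLElement>('[contenteditable="true"]');

editor?.addEventListener('paste', (event: ClipboardEvent) => {
  event.preventDefault(); // stop the browser from pasting styled markup
  const text = event.clipboardData?.getData('text/plain') ?? '';

  const selection = window.getSelection();
  if (!selection || selection.rangeCount === 0) return;
  const range = selection.getRangeAt(0);
  range.deleteContents();                           // replace any selected text
  range.insertNode(document.createTextNode(text));  // splice in the plain text
  range.collapse(false);                            // move the caret to the end
});
```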

In fact, contentEditable implementations are still messy, and I’ve kept track of a number of workarounds since 2009 (I essentially gave up on it over the past two years). But I have a lot more gripes with mobile and web applications, many of which seem to be carrying over to the desktop:

  • They typically require visual interaction (i.e., you have to look at what you’re doing while you’re doing it, killing multitasking)
  • They have very low information density
  • They usually lack input alternatives (iOS is notorious here for its lackluster input device support, but web UIs usually have no keyboard commands either)
  • They sacrifice affordances and discoverability in favor of flashy visuals
  • They remove features (“de-contenting”) to sell “pro” versions

The only consistent upside from mobile and web UX (which is largely driven by their business goals) seems to be an increased awareness of interaction flows and how to streamline interactions to save users’ time–and maximize profitability for their creators.

The Electron Mess

The downside, for me at least, is that thanks to a warped variant on back-propagation the desktop app experience is deteriorating before our very eyes. The collected body of human-computer interaction expertise (that’s what we “old” people called UX before it became a hipster minefield) numbers in the thousands of man-years, but it troubles me somewhat that we’re letting it all fall into the hands of the Electron hordes and their massively inefficient, one-size-fits-all excuse for cross-platform development.

I get that doing native development these days is complex, and that the learning curve might be daunting. I myself am writing a small Cocoa-based app for my own use, and really wish there were a widely accepted universal GUI toolkit, since if I ever want to port it to Windows (which is unlikely, but possible) I am going to have to rewrite a lot of it.

In much the same way, I fully get that the vast majority of people doing user interface work these days are largely focused on web technologies, but (and this is the crux of the matter for me) there is no excuse for mediocrity on any platform, and the explosive combination of the JavaScript language with Chromium technology is likely to be a death sentence for well-designed, responsive and, above all, usable desktop applications.

Yes, you can do excellent things with Electron. VS Code is a cardinal example of that, and one I use daily on multiple platforms (although with increasing frustration as extensions start eroding responsiveness and speed). But VS Code has a very thin “native” interaction model (as a development tool, it falls outside “regular” interaction patterns), and it is written (and its UX designed) by programmers, for programmers.

Most people building Electron apps for non-technical users, on the other hand, typically lack the skills to do so responsibly, and just churn out half-baked, RAM-intensive garbage that I typically uninstall on sight[1], because the developer trade-offs (single codebase, common code, etc.) do not compensate for my grievances when attempting to use those applications.

And yes, I am very critical of JavaScript as a core language for complex applications, and am constantly coming across reasons to dislike its ecosystem and ethos–only the other day I was reviewing a widely-used package to figure out how it handled a certain piece of user input, and the only validation that string input got was liberal use of indexOf to check for separators instead of, you know, actually validating the data.
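
For the sake of illustration, here is a hypothetical reconstruction of that pattern (the host:port format and function names are mine, not the package’s): an indexOf probe merely asks whether a separator appears somewhere, whereas actual validation parses the structure and checks its constraints.

```typescript
// Hypothetical reconstruction of the anti-pattern, not the actual package code.
// The indexOf style merely asks whether a separator is present somewhere:
function looksLikeHostPort(input: string): boolean {
  return input.indexOf(':') !== -1; // "::::" or "a:b:c" both pass
}

// Actual validation parses the structure and checks each part's constraints:
function parseHostPort(input: string): { host: string; port: number } | null {
  const match = /^([A-Za-z0-9.-]+):(\d{1,5})$/.exec(input);
  if (!match) return null;
  const port = Number(match[2]);
  if (port < 1 || port > 65535) return null;
  return { host: match[1], port };
}

console.log(looksLikeHostPort('::::'));         // true  (nonsense passes)
console.log(parseHostPort('::::'));             // null  (nonsense rejected)
console.log(parseHostPort('example.com:8080')); // { host: "example.com", port: 8080 }
```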

That is not to say that JavaScript programmers are generally incompetent, but every time I hear someone saying Electron applications are “lowering the barrier to access” or “delivering faster”, I actually hear “lowering the bar for hiring resources” and “delivering shoddier releases”.

For all the industry’s emphasis on Agile, I’ve seen more half-baked releases of software than just about anything else, and I see no sign of that abating even with transpilers and supporting technologies.

Going beyond the technical and the cosmetic, the non-native control ratatouille Electron foists upon users is also the death of accessibility and assistive technologies, not because of technology constraints, but largely due to human incompetence, developer ignorance and a general lack of effort that echoes the overall approach to QA in the JavaScript ecosystem itself.

As an example, these piddling 92 lines of Markdown appear to be the entirety of Electron’s accessibility documentation, and pretty much nobody pays any attention to the matter unless they’re shipping a top-tier mainstream app like Slack or Teams.
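
The irony is that the basics are neither hard nor Electron-specific–most of what is missing is plain ARIA on the DOM the renderer already ships. A minimal, illustrative sketch of the labeling so many apps skip:

```typescript
// None of this is Electron-specific: standard ARIA attributes on ordinary
// DOM elements. Labeling a custom "button" so screen readers can announce
// and operate it:
const fakeButton = document.createElement('div');
fakeButton.textContent = 'Send';
fakeButton.setAttribute('role', 'button');              // expose it as a button
fakeButton.setAttribute('aria-label', 'Send message');  // give it an accessible name
fakeButton.tabIndex = 0;                                // make it keyboard-focusable

// Buttons must also work from the keyboard, not just the mouse.
fakeButton.addEventListener('keydown', (event: KeyboardEvent) => {
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault();
    fakeButton.click();
  }
});
document.body.appendChild(fakeButton);
```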

The Useless Network Computer

It is often argued by Electron advocates that the browser-based UX we live in these days is the only sensible thing to develop for (some even call it the “post-desktop” experience), and that they are actually “enriching” the experience by barfing JavaScript-laden hairballs onto desktops for the sake of lower latencies and direct device access.

And it’s true that we’ve yet to settle on one working “desktop” metaphor even as we rush headlong into an uneasy coexistence with touch environments, and that computers (pocketable and otherwise) are largely used for media consumption.

But in that outreach for the masses we’ve lost (and keep losing) the ability to actually do something with most applications except fill out half-baked forms, and I blame the web browser. Not the paradigm per se, but the stupefyingly limited ways in which you can interact with data and content on the web today, and which I run up against every day even as I try to build working windows onto data.

Oh, sure, you can build “rich, interactive experiences” with JavaScript, and I come across a couple of new ones every week at my job, but they are all dead ends from a work/enterprise/productivity perspective.

Take data visualization, in which I have a keen interest: D3 is an excellent example of how JavaScript has pushed the envelope so far that even I would be hard pressed to use anything else for serious visualization work, but you can’t grab the data behind it and shove it into a spreadsheet (or any other kind of data tooling) without a good deal of work (quick CSV dumps don’t count, and even then only a handful of solutions I’ve seen provide such affordances to web users).
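
And the frustrating part is how little work such an affordance would take: the dataset behind a D3 chart is already sitting in memory, and handing it to the user as a CSV file is a handful of lines. A rough sketch, with illustrative field names:

```typescript
// A sketch of the export affordance almost no visualization provides:
// serialize the in-memory dataset to CSV and hand it to the user.
interface Point { label: string; value: number; }

function downloadAsCsv(data: Point[], filename = 'chart-data.csv'): void {
  const esc = (s: string) => `"${s.replace(/"/g, '""')}"`; // CSV-quote a field
  const rows = data.map((d) => `${esc(d.label)},${d.value}`);
  const blob = new Blob([['label,value', ...rows].join('\n')], { type: 'text/csv' });

  // Standard trick: point a temporary link at a blob URL and "click" it.
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();
  URL.revokeObjectURL(link.href);
}

downloadAsCsv([{ label: 'Q1', value: 42 }, { label: 'Q2', value: 57 }]);
```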

Web interfaces are the lowest common visual denominator and they suit our monkey brains visually, but they are intractable and fundamentally broken for actual manipulation. And even though we’ve managed to mimic many “rich” interactions inside single web pages, those pages are still just smaller data silos.

Silos, But Intentional

And speaking of silos, another aspect of the new computing model that I’ve been mulling over is the willful design of applications (be they standalone apps on your device or online services) as silos where your data is effectively trapped, and the death of the computer as a generic data manipulation tool. Let’s ignore business data for a bit and take PIM data (contacts, calendar, etc.) as an example.

Your address book is comparatively easy to move from one service to another, largely because companies have realized it’s critical for customer acquisition (if you can’t move your data in easily, you’re not likely to use a service) and because the schemas involved are actually standardized–vCard (RFC 6350) harks back to an era when standards were drafted by people without vested interests–which means that address book data exchange has been largely solved for a good while now (even though I’m still amazed at the number of failed attempts at doing so over the years).
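
To see why this counts as a solved problem: a vCard is just a small, standardized text payload that any service can emit or ingest. Here is a minimal serializer for vCard 3.0 (RFC 2426, still the most widely interchanged version; fields illustrative, not exhaustive):

```typescript
// A minimal vCard 3.0 (RFC 2426) serializer. Fields are illustrative.
interface Contact {
  familyName: string;
  givenName: string;
  phone: string;
  email: string;
}

function toVCard(c: Contact): string {
  // vCard 3.0 requires both N and FN, and uses CRLF line endings.
  return [
    'BEGIN:VCARD',
    'VERSION:3.0',
    `N:${c.familyName};${c.givenName};;;`,
    `FN:${c.givenName} ${c.familyName}`,
    `TEL;TYPE=CELL:${c.phone}`,
    `EMAIL:${c.email}`,
    'END:VCARD',
  ].join('\r\n');
}

console.log(toVCard({
  familyName: 'Lovelace',
  givenName: 'Ada',
  phone: '+44 20 0000 0000',
  email: 'ada@example.com',
}));
```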

The mass market was also a key driver there, and mobile tech made things (too) easy: Android, in particular, is notorious for making it embarrassingly easy for your app to become a contacts provider, leading to a mess of possible duplicates and half-cocked strategies for linking them in your local address book (with nearly pathological security and privacy issues, but that’s another matter).

And yet, there is no easy way for you to enrich contact information on your own machine without resorting to third-party services that are designed, from the ground up, to vacuum up your data and keep it in a cloud service or a proprietary app[2]. It gets enriched, sure, but you can’t get the enriched data out.

In much the same way, documents and other “generic” data have become hostage to SaaS solutions of various kinds, with even mainstream office suites getting into the act, and more hairballs being thrown at variations on the “rich editing” experience–again, with little to no interoperability with desktop environments or even across applications.

The Platform Gambit

Ironically (for a Mac and iOS user), I’m also increasingly biased against platform lock-in. Besides the usual criticism of the marked difference between yearly price increases and actual improvement across hardware and software[3], or OS-mandated default applications and services (which I actually object little to in macOS, but find irritating elsewhere), I’m more worried about the lack of innovation.

I’m going to take the long way around it and start with hardware, since I’m also irritated by the death of repair (everything is hyper-fragile, glued or soldered) and the death of expansion options (no removable batteries or expandable storage), but I get why they happened, since those trade-offs were necessary (up to a point) to deliver the kind of power and performance we can stuff into a trouser pocket today.

Even my current employer (Microsoft) ships densely-packed, vertically integrated hardware like the Surface Pro (which I love to bits) and the Surface Studio (which I honestly wish I could afford), because that kind of deep integration is the only way to improve the overall user experience, and people who design hardware properly have an innate understanding of who their user is–they do not treat users as mere eyeballs, and care about the feel of their deliverables.

And yet, nobody seems to have an iota of common sense regarding end-to-end hardware and software integration. Apple knows how to do vertical integration on the hardware side and had massive success with the curated App Store model in the mass market for the sake of convenience and security, but has no clue how to merge mobile and desktop UX; Android and Chrome OS are being painfully stitched together and have effectively no reference platform (or even a standard version); and Windows has to deal with so much Intel legacy that even Surface devices still have occasional driver issues.

So it’s no surprise that integrating the whole stack from the OS on up is such a mess, and that standardizing on Chromium for web browsers (and business applications, and maybe even replacing Electron with something tamer) actually seems like a sane thing in comparison.

The Web, Only The Web

I stopped caring about front-end development back when it became apparent that front-end development frameworks had to spend 80% of their code abstracting away browser features. Or rather, just before people started making stuff up on the spot to compensate for the shortcomings of the DOM, either feature-wise or consistency-wise–and I’m not even blaming JavaScript here.

Web development has been broken in various ways for years, and it’s actually a good thing to take a stab at having a single consolidated runtime for it, as long as it stops just short of having every minor component become Turing-complete and self-aware. But I worry about things like security, power efficiency, lobbying, and someone baking in proprietary technology, surveillance enablers and fashionista tech of various kinds[4], because there has to be room for innovation, and having what boils down to a single rendering engine is going to be yet another way for things to break, but with the potential to do so on a massive scale.

I’m worried about lack of innovation in the DOM, in web scripting, in interoperability between web applications, and, most importantly, in overall quality, richness and productivity of the whole experience of using computers, because the Web has become more of a shopfront than a playground, and you never create things while sitting in front of shopfronts.

So yes, I expect that we, as an industry, will keep glorifying web forms and SaaS solutions to the point where they become the only form of UX we will ever need, and remain gleefully ignorant of the irony of perpetuating the IBM 5250 legacy atop terminal hardware that dwarfs the mainframes of yore by several orders of magnitude.

There will be a few holdouts on the desktop, which will have to be as native as possible–games (which have no set UI conventions and need to run natively, or close to it), media creation (music and video apps), and, generally, anything else that needs direct access to hardware for either performance or functionality.

On mobile, things will likely remain as they are for the foreseeable future–web applications simply aren’t good enough for a lot of use cases (no matter how many APIs people try to shoehorn into them), and it is the one domain where native applications have the potential to become richer in terms of interactivity and graphics, to a point well beyond what is possible with web technologies.

Summary

I’m positive there will be much ranting and crying and baying about Chromium taking over the web in one form or another, but I am more concerned about that newfound uniformity lowering standards and decreasing the incentive to innovate even further.

After all, web technologies (even in the messy state they are in right now) gave us Electron because it was easier to cheat at doing proper desktop UX than the alternative, and that is something that nobody appears to have internalized and decided to fix.

My only remaining hope here is that, like Ethernet (which was essentially the simplest thing that could possibly work at 1Mbps, and yet keeps getting pushed out into the Gb range), web technologies improve over time.

The thing is, Ethernet was simple and elegant. Web technologies are a complex, virulent mess that we will need to replace at some point in the future, and we keep piling on technical debt.


  1. I still refuse to use the Slack desktop client, for instance, and not just because of Teams (which I also ran in a browser until it officially replaced Skype for Business).

  2. This business concept waxed and waned several times in the heyday of mobile operators (who most definitely wanted to own your address book) and was pretty popular among social network aggregators. GDPR seems to have made it unappealing for a while.

  3. Where by “software” I mean the very few applications they don’t give away for free, such as Final Cut Pro, since pretty much everything else (including macOS) is falling into various states of disrepair but is effectively gratis.

  4. I stopped caring about support for proprietary document formats back in the GIF wars, and can almost forgive Mozilla for PDF.js, but I draw the line at allowing the modern Adobe to push anything past a standards committee.