It’s been a while since I wrote something sizable about the mobile industry (mostly because, as usual, I’m up to my ears in it and try to focus on other stuff at home), but I couldn’t help noticing Charles Stross’ post, largely because nearly everyone I know who entertains the notion that we’re on the verge of a “Telco 2.0” moment (or similar) made sure I’d come across it via Twitter, RSS or e-mail.
Plus it’s Sunday, and I’ve finally regained the stamina and the peace of mind required to tackle the issues in a dispassionate way.
Before we go on, it’s best to remind folk that yes, I am a Vodafone employee, and that, as my Disclaimer clearly states, whatever is posted here is my opinion. It’s just that I’ve seen this all before, and think I have something of value to add to the arguments being thrown around.
There is no One Phone to break the Matrix
First off, let’s get one thing out of the way: one phone (especially one that is not yet on sale and was, for all intents and purposes, a Christmas gift to employees) will not make the difference. Two companies won’t make the difference. Things just don’t happen that way, regardless of how much success and mindshare Android, Google or Apple have.
It’s not that I don’t understand Charles Stross’ point in some way – it’s just that I don’t think he can clearly (and dispassionately) see all the pieces of the puzzle. The comments on his post (nearly 40 as I start to type this), however, do point out a number of things that make it unlikely his hypothesis will come true as he envisions it, and besides the missing pieces already outlined in those comments, I think that there are a few things still absent from the whole argument.
The first is that (as is usual in anything related to either Apple or Google) most of the arguments being thrown around are US-centric. I keep hammering on this every time people discuss those companies’ strategies in public. There is, of course, a tendency for their strategic decisions to consider their home turf first, but people who rely only on “blog bites” and headlines to form opinions are wrong in thinking that the telco space is the same everywhere.
Guess what, it isn’t.
Acronyms don’t just work by themselves
Let’s consider, for instance, the long-lasting mirage of VoIP, in which I have had more than my fair share of entanglement. Curtailing the discussion to the mobile space, it’s easy for me to tackle the issue in a number of ways. Most people have absolutely no idea of the implications of carrying voice atop a mobile network under the current circuit-switched system, let alone the (comparatively expensive) waste of bandwidth involved in adding an IP wrapper around it from the handset onwards, but I can highlight some of the points.
The truth is that most telcos with a modern switching infrastructure have been using a form of VoIP in their core network for years now, since that was the simplest, cheapest and most effective of the changes involved and had a fair chance of lowering operational costs.
Things from the core outwards, however (in terms of mobility management, backhaul and radio spectrum) are tremendously more complex, so LTE is only part of the answer1 – which doesn’t mean that telcos haven’t planned for it accordingly (again, the US and UK bias of most discussions renders them overly pessimistic…).
But that’s just one end of the argument, and one that pundits love to ignore because they’re typically not involved in telco investment decisions. Let’s look at it from the other end, where they tend to (myopically) focus – the handset.
Not everything is an improvement
You can do VoIP on your phone today. Sure. Great. And, depending on the radio bearer you’re using (i.e., what your phone negotiated with the base station to carry an ungainly overhead of IP packets instead of pure encoded voice), you’re actually wasting nearly four times as much network capacity as the guy beside you doing a perfectly normal voice call.
That’s at least 64Kbps (and some phones will initially ask the network to reserve a lot more for them) instead of typically 12Kbps using AMR, for no good reason, since Mom is actually less likely to hear you properly if there are multiple conversion and transcoding steps involved.
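To make the overhead concrete, here’s a back-of-the-envelope sketch of where the waste comes from. The figures are simplifying assumptions for illustration (a 12 kbps codec rate rounded down from AMR 12.2, one 20 ms voice frame per packet, and uncompressed IPv4/UDP/RTP headers), not exact numbers for any particular bearer:

```python
# Back-of-the-envelope: raw codec bitrate vs. the same voice wrapped
# in RTP/UDP/IPv4 packets. All figures are illustrative round numbers.
CODEC_BPS = 12_000          # ~AMR narrowband, simplified from 12.2 kbps
FRAME_SEC = 0.020           # one voice frame per packet, every 20 ms
HEADER_BYTES = 20 + 8 + 12  # IPv4 + UDP + RTP headers, no compression

payload_bytes = CODEC_BPS * FRAME_SEC / 8      # 30 bytes of voice per packet
packet_bytes = payload_bytes + HEADER_BYTES    # 70 bytes on the wire
packets_per_sec = 1 / FRAME_SEC                # 50 packets per second
voip_bps = packet_bytes * 8 * packets_per_sec  # total wire bitrate

print(f"raw codec:    {CODEC_BPS} bps")
print(f"VoIP wrapped: {voip_bps:.0f} bps ({voip_bps / CODEC_BPS:.1f}x)")
print(f"64 kbps bearer vs codec: {64_000 / CODEC_BPS:.1f}x")
```

Header compression schemes can shrink the per-packet overhead, but the point stands: between the IP wrapping itself and the fatter bearer the network has to reserve for it, you burn several times the capacity of a plain circuit-switched call.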
Honey, where’s my backpack?
That inevitably led to a discussion of bandwidth, spectrum allocation and whatnot in the comments to Stross’ piece, and there are some pretty realistic views there, but (at least while I’m typing this) the discussion misses one tiny little barrier involved in making the vision of permanent multi-megabit connections to smartphones possible:
Your battery will run down in no time flat regardless of what kind of frequencies, modulation or encoding you use. Unless you fancy strapping on a backpack or stretching the definition of “handset” to include “something best held with both hands”.
The funny thing about this is that the transition to 3G (which, by the way, didn’t have the same drivers, constraints or political context everywhere) ought to have made this plain to most people – battery life is the Achilles’ heel of most mobile devices these days, and it isn’t as if the industry hasn’t tried to fix it – it’s just that physics has these little constraints we can’t get around.
And most people wouldn’t believe the amount of handshake traffic regarding power and code negotiation that goes on before they even place a call on a 3G handset, let alone the veritable bleeding out of batteries to the ether while surfing the Internet on a phone’s browser. There are constant adjustments on either side to make it work for you, continually re-computing not just the power output your phone should have, but also every other phone’s in the same cell.
That kind of thing you can’t just abstract away. Which reminds me, there’s the slight matter of actually having something with a powerful enough CPU in your hands to make sense of all the data being thrown at it3 and show a cat jumping into a box and falling over.
It’s all in the numbers
Another thing I found funny is that E.164 is “broken” in the eyes of the cool crowd, because it’s a bunch of numbers instead of their favorite username or e-mail address. Actually, the only reason it’s sort of broken is that it was originally designed for ISDN landlines, and as such numbering had to have geographical significance4 – and of course mobility changed all that.
But the reason it works is that you can actually use it on a global scale with relatively simple, dumb and reliable equipment, and that includes doing number portability across networks and delegating authority over numbering ranges.
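To make the “simple, dumb and reliable” point concrete: routing an E.164 number is, at heart, a longest-prefix match against delegated numbering ranges, which is something very modest equipment can do. A toy sketch (the prefix table below is invented for illustration, not a real numbering plan):

```python
# Toy E.164 routing: longest-prefix match against delegated number ranges.
# The table is invented for illustration; real plans are far larger, but
# the lookup logic stays this simple.
RANGES = {
    "351":   "Portugal (national authority)",
    "35191": "Portugal, mobile range delegated to an operator",
    "44":    "United Kingdom (national authority)",
}

def route(number: str) -> str:
    """Return the authority for the longest delegated prefix that matches."""
    digits = number.lstrip("+")
    best = max((p for p in RANGES if digits.startswith(p)),
               key=len, default=None)
    if best is None:
        raise ValueError(f"no delegation for {number}")
    return RANGES[best]

print(route("+351912345678"))  # matches the operator's delegated range
print(route("+442071234567"))  # falls back to the UK national prefix
```

Delegating a new range, or porting numbers between networks, amounts to updating entries in tables like this one, with no dependency on who runs a particular domain name.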
Sure, it’s nicer and more user-friendly to call firstname.lastname@example.org, but that’s incredibly short-sighted – how long do you think example.org will be around for, and what happens if Mom changes jobs (or ISPs)? Come to think of it, what does the track record of SMTP (and spam) tell you regarding how we’ve managed to use those kinds of addresses on what is probably the simplest IETF protocol ever that uses end-user identifiers directly?
I think the trouble here is that it’s too easy to write something like “oh, OpenID will ‘fix’ this”, but the truth is that:
- it hasn’t really caught on yet for the Web, let alone other services,
- the current user experience sucks,
- nobody seems to have sat down and worked out the details for doing it on a truly grand scale yet.
And one of its foundations, DNS itself, isn’t getting prettier over the years, because it simply wasn’t designed to become a commercial operation subject to the whims of misguided advertisers and domain hijackers.
The Bottom Line
And yet, all of the above share an important aspect that some comments on Stross’ piece hinted at, but that the digerati keep forgetting: the man on the street doesn’t care about these things.
People just want a solution that works, and if there’s one thing I’ve learned over the years, it’s that we (as a civilization) keep dissing the stuff that just works and trying to come up with solutions to problems that don’t really exist. That usually turns into a worse mess, because by the time someone solves a piece of the puzzle the rest of the world has moved on, and it simply doesn’t fit anymore.
I have a much simpler view: We don’t have a technological (or even economical) problem in telecommunications. Nor is there among telcos an ungodly fear of becoming bit pipes or of new paradigms bringing “level playing fields”, because that simply isn’t the point – a level playing field just means everyone can do the same, and telcos aren’t the ungainly ponderous beasts the digerati love to demonize – they can go out and play in the Internet just like everyone else.
What we do have is a cultural rift between the telco world and the digerati, and it’s somewhat ironic that the folk who keep pushing for new concepts and ideas don’t seem to want to bridge that gap in the first place – especially when folk like me can bear witness to the fact that telcos are the ones changing proportionally faster than their “new” competitors.
1 It is actually a monstrously large part, since it’s not so much a single technology as, as its name implies, a “long term evolution” plan for current networks. And it’s not “long term” because the telcos want it to be so, it’s “long term” because we’re not talking about child’s play here.
3 As with the old adage “fast, good, cheap: pick any two”, in the telco world we might as well say “fast, portable, light: pick any two”.
4 In case you don’t know, I was one of the folk working on the software bits of ISDN cards back when it mattered to have Multi-link PPP working across bonded 64Kbps channels. Folk in Portugal using PC-BIT cards can both thank me and curse me for helping debug that particular aspect of it around 15 years ago…