I know I’m late to this party, but I’ve been reading John Gruber’s piece “Something Is Rotten in the State of Cupertino”, and finally found time to share my thoughts.
And, to be fair, at first I let it linger in my RSS reader due to its size, but during a quiet evening yesterday I finally got a few paragraphs in and thought, “OK, John’s really upset about this”.
But I get why. When a feature like personalized Siri, meant to pull context from your emails, messages, and even your screen, keeps getting delayed, you can’t help but ask: isn’t this reminiscent of the old “so why the fuck doesn’t it do that?” moment from the Jobs era?
John cuts through the hype with a dry reminder that good marketing must be backed by working technology, and since I too prefer a solid demo to slick concept videos and glossy commercials, I have to say Apple’s credibility is starting to show cracks. Well, new cracks, at least.
It is true that Tim Cook has transformed Apple into a profitable, well-oiled machine. However, promising breakthrough AI features without providing demos or hands-on trials risks undermining the trust Apple has built, except perhaps among long-time Apple developers who’ve been burned by similar things in the past.
So there are three points I would like to make here:
- Siri has been woefully mismanaged for many years.
- Apple Intelligence doesn’t just lack focus, it’s unattainable as is.
- Even if Apple could sort out the two points above, they currently lack enough of an automation foundation to make either meaningful.
Siri Has Been Completely Stalled For Years
So, here is what I found on the web about Siri improvements in recent years (and if you don’t get that reference, you must have been using an Android device for the past decade):
Nothing. Nada. Zilch.
Apple’s recent announcements about “more personalized Siri” and Apple Intelligence felt like the same old song and dance from Cupertino—big promises with little substance.
Oh, there are a couple of pieces about corporate reorgs (including this one from January, which seems to fall mostly into the “rearranging deck chairs” category), but essentially nothing has been upgraded or improved since 2021 except a shift to on-device speech processing.
Everything else is still largely the same, and anyone who’s asked Siri to do the simplest of things and hasn’t (yet) given up will readily attest to that.
This level of staleness in a voice assistant isn’t unique (we did have a pandemic, and Alexa was also pretty much neglected until Panos Panay moved to Amazon), but given that Apple was already so comically far behind, it’s more likely that the engineering effort required to do anything useful fell through the cracks between their shift to Apple Silicon and their push to bundle services for extra revenue.
A part of me also wants to rant on about the general decay of macOS and iOS in anything other than cosmetics, but we can skip that for the moment.
Apple Intelligence Will Likely Always Under-Deliver
Let’s look at its current state.
As an EU citizen, I have had to finagle my way into actually being able to test a few of Apple Intelligence’s “features”, and the text-oriented ones are only marginally better than my homebrew shortcuts because they’re baked deeper into the system and not restricted to the (woefully unmaintained) Services menu.
But, other than notification summaries, which I’ve kept on solely for wry amusement, I’ve only used them to reply to an actual e-mail once–and that was pretty much by mistake.
I used Image Playground less than a dozen times (mostly to see if I could break the content filtering, honestly), and there are so many free image generation tools out there (and they’re moving so quickly into video) that I feel like I’m wasting perfectly good pixels in pointing out its irrelevance.
The main issue I have with Apple Intelligence is that all the features that have been surfaced so far (except perhaps Writing Tools, which can actually be a powerful accessibility enabler) are utterly pointless, and even if Private Cloud Compute seems like a major technical achievement to the uninitiated, there are loads of confidential computing solutions out there right now.
But here’s another thing: any confidential AI computing solution, regardless of how it manages context and compute, requires a working AI model–preferably one smarter than the one powering Writing Tools. And Apple has, so far, shown zero public interest in creating the kind of frontier models that people like me deal with on a daily basis.
I initially thought that their punting on some replies and deferring to ChatGPT was a way to deflect accountability (although they had no compunction about having “older” Siri defer to Wolfram Alpha, something that seems to have stopped), but now I think that Apple is simply incapable of (or, most likely, unwilling to) invest in training their own large-scale models and is sticking to gemma- and phi-grade on-device SLMs.
And that might be a bit of a problem, because (taking gemma3 as this week’s example) those models are not really that useful. I have been trying to coax some of them to tackle limited contexts like home automation for a while, and the results are not encouraging.
Prior to Tim Cook, I would hope for something magic being done someplace in a vault under Apple Park. Now, it’s either happening Severance-style or not at all, because I honestly don’t think Apple is that good at software anymore.
There Is No Moat, But Apple Has No Automation Foundation Anymore
Just go into Shortcuts and check what actions you can automate in any of Apple’s core apps (at least the ones that seem relevant to me in the context of a “Smarter Siri”):
- Two actions for Mail, neither of which tackles searching for an e-mail or getting its contents
- Two for Messages, neither of which is contextual
- Seven for Contacts, none of which are contextual either
- Eleven for Calendar (surprisingly)
- Twenty(ish) for Notes (even more surprising – these people are my heroes)
- Eleven for Photos, none of which deal with search or context
- Nine for Reminders
- Three for Home (which is kind of a major thing in “personal contexts”)
There is pretty much zero “public” API surface to do anything remotely like what Apple Intelligence requires to fulfill its promises when it comes to context awareness or application actions.
But here’s the thing: even if you consider that Apple would primarily be using some sort of private automation hooks, the foundations for those have long rusted away (or never been implemented in iOS).
That gap, together with the apparent extinction of proper app platform features in macOS and iOS (as well as an overemphasis on sandboxing all the things), is very probably why Apple is blending AI into the apps themselves in such an uneven way, and (along with the blind alley of LLMs) another major stumbling block to building “proper” Apple Intelligence (even if it doesn’t excuse Siri).
As someone who relied a lot on e-mail plugins until Apple effectively killed them, who still tries to use JXA and Shortcuts to automate their workflows, and who bore witness to AppleScript’s progressive demise over the years, I’d say Apple hasn’t just painted themselves into a corner here; they did it through their general neglect of macOS and iOS foundational technology.
And, of course, third party developers aren’t happy with this at all–not just because Siri would abstract (and hide away) their apps to fulfill user intents, but also because there is no real return from building app intents atop such a shaky foundation.
In short, Apple is screwed, and it’s kind of ironic that they got bit by their own self-inflicted AI hype without going all in on AI themselves.