The Haves and The Have Nots

One thing I’ve noticed over the past few weeks is the huge gap in both the perception and the availability of AI models.

William Gibson is, again, right: The future is already here—it’s just not very evenly distributed.

Take a dive into the churning cesspool of Twitter/X, and it’s pretty obvious: on one side we have influencers (and people with what has to be significant disposable income) claiming to run with the most expensive Claude Opus subscriptions, and on the other side people who are bargain-hunting inference endpoints or trying to make do with open-source models.

But what amazes me are the people who are paying north of $20,000 for fully kitted-out Mac Studios—sometimes in pairs or trios—to run local inference. I will freely admit I wish I had that amount of money lying around (I am still trundling along with a 3060 and a pretty much vanilla MacBook), but the key thing I’m noticing is that people without the means to access frontier models (or even decently performing local ones) are quite likely never going to be able to form any sort of grounded opinion on AI usefulness.

Open-source models are being hyped as one of the things that will change this, but the latest GLM and Kimi releases (or even the Qwen3 variants) all require an eye-watering amount of resources to run, so they aren’t going to help bridge this gap.

The result is that we have a growing divide between the haves and the have-nots in the space. Those with access to powerful models and resources are able to experiment, innovate, and create new applications, while those without are left behind, or (even worse) unable even to realize what is happening.

I have no solution for this, especially not when personal hardware is getting so expensive, but I hope we’ll sort it out over the next few years—preferably by moving models to the edge and making them more efficient, which has been the obvious direction of travel for a while now.