WWDC 25 Keynote Thoughts
If you discount the completely over-the-top book-ending (the F1 cameo featuring Craig’s hair and the weird app review medley at the end), there were a few actual surprises in the keynote.
It was a strange month. This update is late since I have had far too much on my mind, and that has also impacted a few projects, but there are a few things I want to note:
I have been doing a little experiment with my feed summarizer over the past couple of months, and I’m very sorry to say that the latest version was pretty much all written using AI.
VS Code and Claude have evolved a lot in terms of integration and my staple approach is still valid, but there have been quite a few annoyances along the way.
Here are some of the main issues I’ve noticed:
- Over-eagerness to process TODO items: most models now tend to churn through TODO lists unprompted, often without validating whether the resulting code actually works as intended.
- Zero short-term memory: even with SPEC, TODO, and the new .github/copilot-instructions.md grounding, coding agents still seem to forget earlier context from one interaction to the next.
- Untidiness: Claude, in particular, is prone to littering the project with test_ files that are frequently re-written and rarely re-used.

And I think the key word here is “eager”. The chat interactions have become grating and annoying, so I now just say “Implement item #3 in TODO.md and write appropriate tests”.
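For context, the grounding file mentioned above is just a plain Markdown document the agent reads before acting. A hypothetical sketch (not my actual file, and the rules shown are made up for illustration) might look like:

```markdown
# .github/copilot-instructions.md (hypothetical example)

- Only implement the single TODO item you are asked to work on.
- Do not modify unrelated files or rewrite existing tests without being asked.
- Run the test suite and report failures instead of silently "fixing" them.
- Prefer small, reviewable diffs over sweeping refactors.
```

In practice, as noted above, these instructions only go so far against the models’ eagerness.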
The upshots are that the current codebase has a lot more logging than I would have ever bothered with and an (arguably) better database schema, but I am not overly impressed. Overall, using AI saved me just a little more time than the time I spent working the problem and explaining to it the issues it created, and it can be more mentally burdensome to type out the stuff I want fixed (because I have to write an all-inclusive prompt to avoid deviations) than to fix it myself.
As to the changes I’ve noticed over time, I suspect this is mostly due to tweaks in the scaffolding (updating the system prompts inside the editor, etc.) and nothing I can really ascribe to any model improvements.
I don’t really like the code, but it is stuff I didn’t really want to write myself. Time will tell how correctly it works, but I am a bit worried about overall quality–not just of my stuff, but of the generation of software that is being written today.
We finally finished watching the second season of Andor, which is, together with Rogue One, undoubtedly the best Star Wars I’ve ever seen. In comparison, all the other Star Wars spinoffs feel like kitschy messes and self-serving director playgrounds that might as well be binned, but I don’t suppose Disney will really get this–we were just lucky.
On a lower note, for the first time in a couple of years I am behind on my regularly scheduled reading to the point where I really need to step things up a tad. In my defense, I wasn’t expecting the Blue Ant Series (which I had actually never read in order) to be this long-winded.
I’m actually posting this a couple of days later, partially because I spotted a lot of traffic towards my disclaimer page and other similar posts from the recent past, and partially because it’s been… stressful.
I’m OK. Plenty of other people aren’t. 4% of the company this time included a few of my former Portuguese colleagues, and I feel like I have to bring up last year’s post to remind myself these things are a reality these days.
I will say this one thing, though: I handpicked this particular news item to backlink to because it was one of the first on major media and has some of the most shallow, clueless commentary I’ve ever seen Bloomberg dole out. None of the “news” coverage that mentions AI has any real clue.
It’s been a long while since I last wrote about a piece of music gear (and there are quite a few I got in the intervening years that I have been meaning to write about), but I recently got my hands on the ESI Xsynth and despite only having spent a few weeks with it, I thought I’d write a quick review.
Disclaimer: ESI sent me a review sample of the Xsynth free of charge (for which I thank them), and as usual this article follows my review policy.
The product page has a list of detailed specifications, but there are five things worth highlighting that I found particularly interesting:
All of it in a surprisingly compact and portable package that only needs a USB-C connection to work. No battery, but the power draw is low enough that you can use it directly with an iPad without needing anything but a cable.
The device feels solidly built, with a mix of stylized metal and plastic that looks and feels nice–at 387mm x 148mm x 27mm and slightly over half a kilo (634g) in weight, it certainly feels solid enough to take on the road, and given that the encoders are fairly compact and don’t protrude much, I would have no qualms about tossing it into a backpack or laptop bag.
The keyboard was a bit of a surprise for me, since for some reason I was expecting mini keys–instead, what you get is a two-octave set that matches the general dimensions of a normal keyboard (which is great if you have trouble switching between mini keys and regular keyboards), although the mechanism and feel is rather different.
I especially like that, unlike my Korg NanoKey Studio, both the black and white keys are long enough to interleave and allow you to place your fingers in exactly the same way you’d do it on a regular keybed.
A key aspect is that pressing on the keys anywhere results in a nearly uniform response (velocity and aftertouch-wise). You can tap a key at any corner and it will go down uniformly, which is quite a unique feel, and the evenness of that response makes the Xsynth feel a lot more usable (and expressive) than the NanoKey Studio or an Akai MPK Mini.
On that topic, the polyphonic aftertouch is a bit of a mixed bag depending on whether you’re used to a piano keyboard or a synth keyboard, since given the keyboard has a very short travel distance, the aftertouch response varies a lot depending on how much pressure you’re used to applying.
Despite having a “real” piano I spend a lot of time on synth keys, so even though I had no trouble with the default velocity curve, aftertouch required some tweaking before I could use it effectively–so I recommend using the companion desktop app to tweak things to your liking.
Also, there are no modulation or pitch wheels, just two buttons that are a bit too short and unresponsive for my liking. They do work, but I found them somewhat fiddly to use (and, more annoyingly, easy to hit by mistake since, again, I found the orange backlights hard to read).
I spent a while trying out the Xsynth on its own and quickly figured out that I had to use the modulation matrix to send aftertouch values to an oscillator or filter (which not all the built-in presets do), so expect some finagling before you get a result you’re comfortable with.
Both velocity and aftertouch work via MIDI, even though (at least in my case) I have more software with MPE support than “just” aftertouch. In any case, Logic Pro’s Alchemy synth had no trouble with Xsynth input, although, again, a consistent feel was a bit elusive given the limited key travel.
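For anyone fuzzy on what polyphonic aftertouch actually means on the wire (as opposed to the more common channel aftertouch), here is a quick byte-level sketch of the standard MIDI messages involved–this is generic MIDI 1.0, nothing Xsynth-specific:

```python
# Polyphonic aftertouch (status 0xA0) carries a pressure value per note,
# which is what makes per-key expression possible; channel aftertouch
# (status 0xD0) sends a single pressure value for the whole channel.

def poly_aftertouch(channel: int, note: int, pressure: int) -> bytes:
    """Status 0xA0 | channel, followed by note number and pressure."""
    return bytes([0xA0 | (channel & 0x0F), note & 0x7F, pressure & 0x7F])

def channel_aftertouch(channel: int, pressure: int) -> bytes:
    """Status 0xD0 | channel, followed by one pressure byte for all held keys."""
    return bytes([0xD0 | (channel & 0x0F), pressure & 0x7F])

# Pressing harder on middle C (note 60) only, on MIDI channel 1 (index 0):
print(poly_aftertouch(0, 60, 100).hex())   # a03c64
print(channel_aftertouch(0, 100).hex())    # d064
```

MPE, by contrast, fakes per-note expression by spreading notes across channels, which is why software with MPE support doesn’t automatically handle these per-note pressure messages.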
But I can’t think of anything equivalent in this size that has polyphonic aftertouch and all the other features, so I am not complaining too much.
One of the first things that you’ll notice when you turn on the Xsynth is the built-in OLED screen, which is used to show the current patch, parameters, and other information. It is a fairly small screen, but it is bright and clear enough to be readable in most lighting conditions–much more so, ironically, than the orange backlit buttons, which are a bit dimmer than I would have liked and can be hard to read even in completely dark conditions.
The default setting is to render the current waveform, which for me isn’t overly useful–so I disabled that as soon as I could to be able to always see patch names and parameters:
In general, the UI is really straightforward–each of the four left encoders corresponds to a parameter, and you can switch between pages of parameters by hitting the left and right buttons. The far right encoder is used to navigate through patches, and the enter button is used to confirm changes, except in a particular case that gave me a bit of trouble:
You can hit the “Global” button to init a patch, and it is far too easy to hit Enter by mistake and completely wipe out the current patch, simply because the “Cancel” button is immediately on top of the “Enter” button and (again) the buttons are a bit dimmer than I would have liked:
So I ended up wiping out a few factory patches before I got used to this quirk. Strangely enough, saving edits to a patch required me to hit the “Enter” button twice in some circumstances, which is a bit counterintuitive (I kept hitting it once and then wondering why it didn’t save).
I would have preferred a “hold down X for 2 seconds to confirm” type of confirmation for both resetting and saving patches, or at least a “hold down X + Y to confirm” approach for more destructive actions.
But going back to the Global menu, that’s where you can set up MIDI channels and other parameters, including having local audio on or off, which is useful if you want to use the Xsynth as a MIDI controller only:
You get four banks (A-D) of 128 patches, the first two of which have factory presets of various types (bass, lead, pad, etc.). I typically skip over effects and bass patches (it’s just a personal preference), but a few of the leads and pads were quite nice, and I ended up using those as starting points for my own parameter tweaking.
A nice touch is that you can browse through patches using a category mode, which lets you quickly skim through all pads or leads, etc. This is a lot more useful than the usual “scroll through all patches” approach, and it makes it easier to find something that fits your needs.
But there is a surprising amount of flexibility in the synth engine itself:
If you look at the block diagram above, you’ll see that the Xsynth does a few interesting things:
This doesn’t mean that it’s easy to create patches (as with any modern synth, you need to spend a fair amount of time learning the implications of some controls), but it does mean that you can do a lot more with the Xsynth than you might expect.
For me at least, it was much more productive to use the desktop editor to create and edit patches, since it has a much larger screen and a more intuitive interface for editing parameters:
The editor is available for both macOS and Windows, and as a bonus it can also be used to do firmware upgrades and restore factory settings.
In a nice departure from the usual “effects are a separate box” approach, the Xsynth supports a small set of built-in effects that (as you can glean from the diagram above) can also be applied to incoming audio, and fall into roughly four categories (which are mapped to the four right encoders):
The reverb is serviceable and the delay works well enough, and I spent a little while trying to get the Auto-Wah (of all things) to work with the aftertouch. So yes, you can have some fun with the effects, but they are not the main selling point of the Xsynth.
Like I mentioned above, the aux input can also be used to process external audio, and it worked fine with my homebrew DX7 clone, which I hooked up to the Xsynth via both MIDI and audio.
Despite being bigger than I initially expected, the Xsynth feels rugged enough to just drop into a backpack or laptop bag without a lot of concerns, and like I mentioned above, I was able to use it with my iPad Pro 11” with a single USB-C cable and use AUM with it:
I also had no issues using it with GarageBand, and went for AUM mostly because I completely suck at looping and having a keyboard that I can use as an audio interface was a great way to hook up a few of my standalone synths and use them as audio sources for practicing that.
I also had no trouble using the Xsynth as a master keyboard, both with Logic Pro on my Mac and with Bitwig on a Linux laptop1, and it worked well enough in both cases.
If you want to use the Xsynth as a master keyboard, you can do so by connecting it via USB to your computer and using MIDI CC to control your DAW or other software.
What I found most useful is the ability to have multiple pages of CC assignments for the encoders, which you can switch between by hitting the left and right buttons.
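To make the paging behavior concrete: the same physical encoders send the same CC numbers regardless of page, so it is the receiving side (or the device’s active page) that decides which parameter a given CC drives. Here’s a hypothetical host-side sketch–the CC numbers and parameter names are made up for illustration, not the Xsynth’s actual mapping:

```python
# Sketch of "paged" encoder CC handling: four encoders, multiple pages.
# The active page determines which parameter each incoming CC controls.

ENCODER_CCS = [20, 21, 22, 23]  # assumed CC numbers for the four encoders

PAGES = {
    0: ["cutoff", "resonance", "attack", "release"],
    1: ["reverb_mix", "delay_time", "delay_feedback", "drive"],
}

class PagedController:
    def __init__(self) -> None:
        self.page = 0
        self.params: dict[str, int] = {}

    def handle_cc(self, cc: int, value: int) -> None:
        """Route an incoming CC to the parameter assigned on the active page."""
        if cc in ENCODER_CCS:
            name = PAGES[self.page][ENCODER_CCS.index(cc)]
            self.params[name] = value

ctrl = PagedController()
ctrl.handle_cc(20, 64)    # page 0: first encoder drives "cutoff"
ctrl.page = 1
ctrl.handle_cc(20, 100)   # same CC on page 1 drives "reverb_mix"
print(ctrl.params)        # {'cutoff': 64, 'reverb_mix': 100}
```

This is roughly why eight parameters are still “workable” with four encoders, at the cost of a page flip.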
In my case, I set it up to control Arturia’s Analog Lab on my Mac (which really demands eight controls, but it’s workable).
I’ve barely scratched the surface of what the Xsynth can do in a couple of weekends, but one of the reasons I wanted to write this review is to share my initial impressions before Summer vacation starts and I have a bit more time to play with it.
The size and comparative ruggedness makes it quite suitable to use on the go, and I can see myself taking it along with me on Summer break instead of the Korg NanoKey Studio, which can’t do much more than act as a MIDI controller and has no built-in synth engine.
It does have a few quirks, but like Floyd Steinberg, who did a video on it recently, I would rate it an 8/10 (or 4/5 stars, if you prefer that rating system) for its price point and features, especially considering the built-in audio interface and range of effects.
For me, other than the software quirks, the only physical changes I’d make would be switching the orange backlights to white and adding more conventional modulation and pitch controls (perhaps control strips).
Given that I prefer to travel with an iPad, I would love a mini-keys version of the Xsynth that I could pack in roughly the same space as the iPad Pro 11”–I will probably update this review after I have done a couple of trips with it and can figure out whether the full-sized keys make up for their size.
But I am quite happy with the Xsynth overall and look forward to diving deeper into its capabilities in the coming months.
I agree with John on this. Sometimes designers want to make their mark so badly on a project that they gloss over tradition, established branding, or earlier styles that were there for a reason, and the updated Beta 2 icon still does not look like the Finder to me, even if I squint at it without glasses.
Design systems are usually created with a bunch of fluffy “rules” that verge on hyperbole, and this feels like the result from one of those “creative” processes. I bet there is a Figma page someplace with a list of design principles that says something like “an outline for visual demarcation of the icon’s key visual, allowing whitespace to define its boundaries” or some such nonsense.
Just stop fucking with the Finder icon, guys.
It’s been a crazy few weeks, and in keeping with the geopolitical situation, it doesn’t seem like it’s going to get any better soon, so keeping positive things in mind is a bit of a challenge.
Additionally, the weather has been warming up so the office crawls past 27°C most days, which means I am spending as little time in there as possible.
As such, I’ve been trying out a few things I don’t need a desk for. Writeups will appear ASAP, but one has been an intriguing complement to AudioKit’s completely amazing (and totally free) Synth One J6 app for the iPad – however, the lack of musical inspiration due to workload has been too much for me to write about either…
But I finished my tiny SE/30 and compiled an SDL-only version of BasiliskII for it, which is currently running System 7 and (of course) the After Dark screensaver:
The steps to get the core software going (and minimize boot time) are mostly documented in this YAML file (I have no time to do a full write-up, but this is meant to be reproducible).
Like I mentioned a couple of weeks ago, I also got a roll of “platinum grey” filament from Polymaker, but since I want to do a version with proper audio, better internals and a slightly different look, and there are no STEP files available, I’ve been recreating a usable model from the STLs I have:
…and, since I want to have a period-accurate mouse, I am indeed modeling one – starting with the fittings for a modern three-button one (with wheel). The tiny mice I’m using are cheap enough that I am planning to do a couple of these (a period-accurate one and an ADB version), and I’m definitely not rushing things:
The mechanism I’m using seems to be pretty common in US$2 AliExpress mice, and looks like this (SD card for scale, since I was out of tiny bananas):
There’s a lot more going on, including the fact I’m shifting the static site builder to an ARM VM on Azure that is 30% cheaper (and has twice the cores) than the Intel one that’s been publishing my updates, but I’ve also been changing the way I deploy the builder service, so it’s not all there yet.
But I am very happy with the approach I was taking at paring down piku even further, and a few smaller things are already running that way. I just got distracted by work and the somewhat crazy notion that I should rewrite my RSS feed summarizer in Go just because I wanted to get off the usual “Python is the AI runtime” beaten track–I get enough of that at work.
But I am going all “agentic” on the idea of converting the older 4000-odd entries on this site to pristine Markdown with some sort of AI workflow. That might arguably be an even bigger time sink than anything else since like any other massive AI-driven content or code conversions, validating the results is a nightmare…
This is so cool, and I’m happy that the Mac lineage is being preserved somehow for future generations. There’s a lot to digest here, from the intricacies of PPC emulation to rebuilding disk images for legacy software; it reads like both a technical diary and a casual trip down memory lane.
And as someone who’s just finished 3D printing and setting up a tiny SE/30 and working on improvements (and perhaps a v2), it is great inspiration–I’d love to build a working replica of the G4 iMac I used to own, and this might play a part in it.
I really didn’t have time to comment on The Illusion of Thinking paper before a bunch of knee-jerk reactions started coming in from the “LLM religion” side of the industry, so I decided to let it blow over even though I was really tempted to comment on the more idiotic takes that pointed to the paper as a way for Apple to minimize the importance of LLMs and thus (as the conspiracy theory goes) distract people from having dropped the ball.
There was already enough idiocy online, and feeding both anti-Apple and pro-LLM trolls would benefit nobody.
But this piece does paint a good summary of where we are at, and looking back, it’s interesting to see that many people still assume probabilistic token generation is tantamount to thought (let alone logic) to the point where a single simple, completely deterministic experiment has raised so many hackles.
The key point for me is not the exact test Apple uses–it is, rather, that even with a step-by-step solution laid out, the models “overthink” (i.e., there’s a probability cascade that misses the mark) and stumble in a way that’s as revealing as it is disappointing.
I’ve already given up on the AI bubble bursting–there’s too much riding on the premise that the current tech, even as fallible and unwieldy as it is starting to prove itself, can translate into (somewhat elusive) business value and even political clout, so I now expect GPU investments to follow along pretty much the same track as, say, nuclear missiles even as the software model design side keeps exhausting the possibilities of brute force compute.
But I don’t see a lot of reason to complain that Apple has just pointed out that throwing dice and using them for divination isn’t reasoning–it’s always been kind of obvious.
It’s been a few months since I wrote about the Nomad (and around six months since I started using it), so I thought some kind of update would be in order and gathered my notes into a sort of mid-term review.
I first knew about this yesterday as I browsed Hacker News to spend the time while compiling BasiliskII for bare metal on the SE/30 replica I’m building, so it was… strange.
I spent so much time using 680x0-based Macs and reading about the design choices for the ROM and built-in drawing routines that Atkinson’s stuff made a profound impression on me even before folklore.org was a thing, and of course I know most of the anecdotes involving him by heart (especially the lines of code one).
I’ve always found it kind of amusing that Markdown became the lingua franca of LLMs to the point where it is everywhere in the internals of training, finetuning, response parsing, etc., and am not surprised that Apple decided to export from Notes in it at this particular time (to be honest, they’re just late to the party, since I have long had shortcuts and RTF conversions around).
But some of the remarks–especially the concern that “it shouldn’t be possible to have a malformed note in Apple Notes”—strike a chord. It’s clear that, regardless of what you think about storing your data in Notes, retaining the simplicity of the existing interface is key, and that John has his own mind about the whole thing that is completely free of any of the entanglements you might expect.
I’ve been sitting on this draft for a few days now, partly because I hoped the bitterness would die down, and partly because I kept asking myself whether I should even write it. But I think it is worth getting out of my system, so here goes.
In other times, I would consider this a sign of the apocalypse, but in the current context of the EU’s Digital Markets Act (DMA), it actually seems likely. Considering I use this almost every day and that there are zero alternatives that actually work (remember when we had to use Bluetooth?), I am hardly amused.
I am even less amused by the fact that the EU has pretty much ignored more widely rampant abuses (off the top of my head, the way TVs are sending out advertising data or the way ISPs do traffic shaping and sell your data) while focusing on a feature that is actually useful and works well.
Yes, Apple could have made AirDrop more interoperable, but the fact is that it works well enough for most people, and the alternatives are either non-existent or worse. The EU’s insistence on interoperability in this case seems more like a power play than a genuine concern for user welfare.
They have two choices: open AirDrop up to real interoperability, or pull the feature from EU devices entirely.
If I were Apple, I would just publish an SDK and an open spec for AirDrop, and let the market sort itself out. Removing it entirely and failing to take the higher road would be a true sign of rot in Cupertino, and I don’t think they are that far gone yet.
This was a busy month, but there were a few things worth noting in between the usual work and family stuff, so here goes.
The pico-mac-nano project is the cutest thing I’ve seen in a while, and reminds me of the old SpritesMods ESP32 hack (which I think might still be a bit more functional), but has its own set of inspired hacks, and is readily available.
In fact, the ready-made version is quite tempting. There’s a wry nod to the old Mac, with references like the recessed T15 bolts and even a custom “Picasso” box for collectors.
I have no real use for it myself, but some sort of functional Mac Plus model on my shelf is still something I’d like to have, and this one comes very close–although I’ll probably whip up something with a Raspberry Pi Zero for more functionality…
It’s early Summer, and the slanted sunlight from beneath the drawn blinds reminds me it’s a balmy 30°C outside in a way the orange arc that frames the temperature complication on my Apple Watch can’t, spraying warmth into the living room, so I retire to the rear balcony and spend an hour reading Pattern Recognition in a slightly faded lounger, the tarp taut with my weight.
I really want to get my hands on an AMD Ryzen AI Max+ PRO 395 machine, although not in laptop form. In between this and the Radeon 8060S iGPU tests, the new AMD APUs are looking like the most interesting way to do local AI inference on PC hardware at reasonable wattage, and the Linux support is already great to begin with.
A little cluster of those (like what Framework demoed using exo) would be even more amazing, and cheaper (RAM and storage-wise) than a stack of Mac minis…
The netbook era has come and gone, but I am one of the many people who miss small form factor laptops (12” or smaller), and I’ve found it somewhat frustrating that they’ve been nowhere to be found in mainstream offerings.
This dragged me all the way back to last year, and just as last year, it gave me pause.
The impact of what happened last year took many months to shake off, and even though the scenario is different (this feels as much about slimming management layers as it is about rethinking rapid growth), the human impact is even bigger–1,985 people from Redmond alone, today.
One of the challenges outsiders have regarding this kind of situation is internalizing that even a company that just posted strong quarterly results (among the best ever) can still take drastic steps–reminding us that numbers on paper rarely capture the whole story.
And to that topic, Scott Hanselman said “This is a day with a lot of tears,” which I think speaks volumes about the internal (and human) cost of change.
Although I was not privy to any inside baseball and the orgs affected are (so far) quite distant from my endeavors, the article rings true where it points to post-pandemic growth spurts and long overdue org chart calibration as key drivers–although I’m positive there will be the usual misinterpretations about AI “replacing” people and other such idiocy.
As before, the key to “the after” is rediscovering purpose, and I hope the people impacted will quickly be able to do so.
The profusion of hype on the Internet has led me to take a lot of things with a grain of salt, and if you’re a regular reader, you’ll know that generative AI has already added more than a few teaspoons into the broth of LLM-driven coding.
Although I’ve spent many years looking at SBCs, so far nearly all of them have been ARM-based, and this is the first time I’ve had a proper look at a RISC-V system.
This was a bit of an eventful week–at work and otherwise. I seem to be energized by constant context switching, so a peak of project work actually resulted in me needing to “relax” more thoroughly and do a lot more hobby stuff than usual at the expense of sleep (I’ve slept an average of 6h/night over the past month…)
Miguel de Icaza’s tour de force in Swift development (that I have had the privilege of beta testing, albeit not as consistently as I would like) is now on the App Store.
Xogot brings a full-fledged Godot game development environment to the iPad, and it is very slick indeed. The bundled projects give a good idea of what you can achieve, and it has lots of thoughtful creature comforts.
There’s a dry appeal to watching a traditionally desktop-oriented engine make the jump to mobile (and yes, the notion of building and running games on the same iOS device is intriguing, and pretty much unique, but Godot has been working on Android for years).
This is an amazing technical achievement on all counts, and yes, it proves that you can pack professional-grade tools into a mobile experience, even if Apple doesn’t really want you to and some of the desktop workflows feel a bit alien on an iPad.
Imagine (again) if Apple actually let us do more.
This is one of those times when I must say it is very frustrating that Apple not only killed the 12” MacBook but also refuses to make the iPad a more generally useful device (and I’m actually typing this on an 11” model, just after putting away a comparably-sized Linux laptop I am reviewing).
The new Surface Pro 12-inch goes all-in on USB-C, keeps (but downsizes) the keyboard design and promises about 12 hours of battery life thanks to Qualcomm’s Snapdragon X Plus chip. It also seems reasonably priced at $799. I have been considering getting an ARM machine as part of my next corporate refresh, and I hope this one is on the roster.
I can’t wait until Apple comes to their senses on either of the above, though.
This weekend, I finally migrated my home automation setup off the Raspberry Pi 4 it’s been running on for the last four years and into an LXC container managed by Proxmox on my Beelink U59 Pro.