Notes for June 3-16

These past two weeks were too messy and haphazard to consider even remotely productive–a couple of things took over most of my time, and having two bank holidays this past week was only helpful in the sense that it dampened things down a bit, but I still found it hard to relax.

I had just started a couple of new hardware projects, so I froze them: took notes, labeled all the 3D models with TODOs, put the existing parts into project boxes and am dropping in additional components as they arrive–I’ll eventually get back to them, but not just yet.

In the meantime, I retreated into books, TV, routine sysadmin stuff, and reaching out to people across Europe, while generally trying to disconnect from work completely when I wasn’t on the clock (with the exception of a few calls I had to take during bank holidays).

Apple Intelligence

I’ve been mulling the WWDC announcements and trying to read the tea leaves (I had plenty of time to watch a dozen sessions or so), but a few things seem certain:

  • Apple Intelligence will take a while to roll out, and I would be surprised if it were available in the EU at the same time as in the US (even for those of us who are bilingual or have our phones set to US English).
  • The on-device inference approach seems to validate what I have been playing with–small, quantised models, fine-tuned for local context and common tasks.
  • There is no killer app. I have never believed there was one (except perhaps RAG applied to tailored problem domains), and the lack of emphasis on conversational features is refreshing to say the least.

I haven’t installed any of the OS betas (and am unlikely to, since I want to minimise distractions), but I am curious to see what will actually be available come October.

LLM Consolidation

Although I’m pretty happy with how the AMD Radeon 780M iGPU handles phi3:instruct, increasingly warm summer days and a few ollama bugs prompted me to move all my machine learning stuff back onto borg and rely on the RTX 3060 as my main inference node.

My Macs can certainly pull their weight, but I need just one stable API server for my GPT sandboxes, and I’d rather that be CUDA-based.

So I removed that machine from my cluster, swapped out the original SSD, and turned it back into a Steam box that is perfectly capable of streaming 1080p games around the house.
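
For what it’s worth, the “one stable API server” part boils down to every sandbox pointing at a single ollama endpoint instead of whichever machine happens to be awake. Here’s a minimal sketch of that idea, assuming ollama is listening on its default port on borg and that the ask() helper is just a placeholder for illustration:

```python
# Minimal sketch: all GPT sandboxes talk to the single ollama instance on borg
# (default port 11434) instead of running their own local models.
import requests

OLLAMA_URL = "http://borg:11434/api/generate"  # assumes borg resolves on the LAN


def ask(prompt: str, model: str = "phi3:instruct") -> str:
    """Send a one-shot prompt to the shared inference node and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print(ask("Summarise the last two weeks in one sentence."))
```

Swapping machines later then means changing a single hostname rather than reconfiguring every sandbox.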

Hardware Reviews

I’ve got four or five pieces of hardware to finish reviewing, and will be trying to get those published one by one as a way to slowly get myself back on track.

But suffice it to say that, right now, and even as the company went for an IPO, I’m positive we’ve hit “peak Raspberry Pi”. There’s no question the RK3588 boards I’ve been testing are solid hardware platforms (and much better than the Raspberry Pi 5 performance-wise), and you don’t really need a “full” Pi for most electronics projects…
