Announcing ios-linuxkit: Linux on iPad, the Hard Way

I’m done waiting for Apple to fix things. And one of the things I think should exist is a decent way to run Linux binaries on my iPad.

And after almost six months of messing about with ARM emulation in various forms, I can finally do something about it.

ios-linuxkit running on my M1 iPad Pro

Put bluntly, the lack of hypervisor support on iPadOS should be an embarrassment to Apple–a EUR 1400 iPad Pro with an M4 chip can’t run Docker, can’t run a VM, can’t do any of the things I do daily on a EUR 50 ARM board. Apple has the hardware support and the kernel entitlements, and has chosen to keep them locked away.

ios-linuxkit is my answer to that, or at least as much of an answer as you can get without Apple’s cooperation. It’s a Linux runtime for iOS that provides a working AArch64 userland on iPhone and iPad–shells, compilers, package managers, language runtimes, the lot–without JIT, without RWX memory, without MAP_JIT, without any of the things Apple won’t let you have.

The base is the ish-arm64 branch of iSH, which implements a threaded-code interpreter (they call it “Asbestos”) that translates ARM64 Linux instructions through precompiled gadget dispatch. No runtime code generation means no App Store policy violations, which means it can actually ship. The trade-off is performance–you’re not getting native speed, you’re getting “fast enough for a shell and some compilers.”
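To make that concrete, here is a minimal sketch of the general technique–in Go for legibility, and emphatically not Asbestos itself: the decoder picks a precompiled handler (“gadget”) per guest instruction, and execution is just a walk over function values, so nothing is ever written to executable memory.

```go
package main

import "fmt"

// Each decoded guest instruction carries a pointer to its "gadget": a
// precompiled handler picked from a fixed table at decode time.
// Executing a block is a walk over function values, so no machine
// code is ever generated at runtime and no RWX pages are needed.

type cpu struct {
	regs [32]uint64
}

type inst struct {
	run    func(c *cpu, i *inst) // the gadget for this instruction
	rd, rn int                   // register operands
	imm    uint64                // immediate operand
}

// Two toy gadgets standing in for the real precompiled set.
func movImm(c *cpu, i *inst) { c.regs[i.rd] = i.imm }
func addImm(c *cpu, i *inst) { c.regs[i.rd] = c.regs[i.rn] + i.imm }

// run dispatches a decoded basic block through the gadget pointers.
func run(c *cpu, block []inst) {
	for idx := range block {
		block[idx].run(c, &block[idx])
	}
}

func main() {
	c := &cpu{}
	// Pretend the decoder lowered: mov x1, #40 ; add x0, x1, #2
	block := []inst{
		{run: movImm, rd: 1, imm: 40},
		{run: addImm, rd: 0, rn: 1, imm: 2},
	}
	run(c, block)
	fmt.Println("x0 =", c.regs[0]) // x0 = 42
}
```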

It’s fast in human terms, although my reliance on fast runtimes like Bun masks a lot of the underlying limitations.

Why Now

The timing comes down to converging interests: I have been deep in emulation land for a while now, and even though ish-arm64’s “gadget” emulator is quite a different beast from the naïve block-level JITs I’ve been bolting onto BasiliskII and SheepShaver, I have been developing all of them on the same ARM board I’ve been testing for the past few months, so they share roughly the same approach:

  • Bolt on a VNC server (or an emulated console) so I can connect to it from my iPad
  • Build out several test harnesses (build, base smoke tests, tracing harnesses and automated application testing)
  • Figure out what to do (this is the hard part, and I’ve learned quite a bit across the various emulators)
  • Figure out where it breaks, and why
  • Hand out the drudgery (like test runs and automated fixes) to a piclaw instance as clearly defined piecemeal specs, so I get nice reports and debugging output I can review in a clean web UI

I wouldn’t have had the time or energy to do this without Codex, but I certainly wouldn’t have been able to do it without that board as a test bed. Having a 12-core ARM SBC with 16GB of RAM I could devote to this was a major enabler, even if it was a tad constraining (I would have preferred 32GB so I could run more builds and test matrices concurrently).

What I’ve been doing with it

The fork started as a bring-up exercise, but has turned into something more focused: making the runtime stable and tested enough that you can actually develop on it. The current validation gate has 82 core tests passing on Alpine ARM64, with workload coverage across Bun, Node, Zig, and a few others.

And since I’ve seen quite a few people trying to run AI coding agents on iOS, there’s a separate set of AI CLI harness tests that installs, runs, and does cursory tests on most current agent tools (spoiler: Claude Code was a complete and utter pain to get running; everyone else’s tools mostly “just worked” after a few cycles of JS runtime/kernel call cleanup passes, but theirs was just broken).

The harness testing is AI-driven–I pointed piclaw at it with a custom gdb skill and let it grind through failures, fix them, and re-run. The strategy is mine (which syscalls to prioritise, what the “gadget” fixes should look like, where to invest in performance), but the mechanical detection/fix loop is the kind of thing that would have taken months by hand.
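Mechanically, that loop is nothing exotic. A hypothetical sketch–the harness entry point, the “FAIL:” output format, and the askAgent hand-off are all placeholders, not piclaw’s actual interface:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// failingTests runs the suite and collects the names of failing tests.
// The script name and "FAIL:" output format are placeholders.
func failingTests() []string {
	out, _ := exec.Command("./run-tests.sh").CombinedOutput()
	var fails []string
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "FAIL:") {
			fails = append(fails, strings.TrimSpace(strings.TrimPrefix(line, "FAIL:")))
		}
	}
	return fails
}

// askAgent stands in for handing a clearly defined piecemeal spec
// (with a gdb trace attached) to the agent.
func askAgent(task string) {
	fmt.Println("agent task:", task)
}

func main() {
	prev := -1
	for {
		fails := failingTests()
		if len(fails) == 0 || len(fails) == prev {
			break // green, or stalled: time for a human to look
		}
		prev = len(fails)
		for _, t := range fails {
			askAgent("diagnose and fix: " + t)
		}
	}
}
```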

Why this matters

Because I think it is a thing that should exist, yes, but also because I want to run things like gi (which is still WIP) on my iPad.

Especially AArch64 binaries, which never ran in the original, x86-only iSH. And despite my love for remote sessions, I don’t want to run all of it on a server, nor via a UI proxied from somewhere else–I want to do some of it locally, in a terminal, with my workspace on the device.

Bun, V8 and Go work. Alpine’s apk means I can easily get pretty much every single CLI tool I need to work too, without the compromises other iOS terminal apps impose (apps which I still love, by the way). And since I have been hacking away at my own flavor of ghostty in rcarmo/ghostty-web, I was able to swap out the dated iSH terminal for something that looks right.

It’s not fast (well, it is, much faster than the original, but not native fast). It’s not a replacement for HyperKit on iOS (if we ever get it back). But it’s mine, I can fix it and make it faster to some degree, and it works for me.

And since I have zero intention of bringing it to the App Store myself (or even paying Apple $99 for the privilege of running it on my own hardware without plugging my iPad into my laptop weekly, which is something the EU should really ding Apple for), I am going to maintain it and add more fixes, keeping it open source so that other people can build better, more polished tools.

You’re welcome.

Unexpected Synology Woes

Last weekend my Synology NAS decided, for some unfathomable reason, to stop working after I took it out of the closet, dusted it, and put it back, and I have feelings about it.

In fact, I’ve had them throughout the whole week, because it’s taken forever to get most of my home services up again.

Fortunately, my home automation and a few other things are spread among my nodes, but I had a bunch of things running on that NAS, and I wanted to document what happened because someone else might have the same issues I did and end up here.

Symptoms

The machine booted up (power LED initially blinking, solid green status LED, disk activity almost immediately), but would not show up on the network.

Both LAN interfaces would be up, but emitted zero packets. No DHCP requests, no link-local addressing, not even replies to arping (and yes, I knew the MAC addresses of the machine, because that’s the kind of thing I keep tabs on). I plugged my MacBook and another laptop into each interface in turn, rebooted, and saw… nothing.

tcpdump saw nothing at all. I thought it might be some sort of OS glitch (which is why I tried both laptops), but no luck.

So I tried to reset it to factory configuration. There are two reset levels: the first only resets your admin password and network settings, while the second has you reinstall the OS without losing data.

But nothing worked, and Synology’s tooling just couldn’t find the NAS or connect to it.

Recovery

The first thing I did was set up Virtual DSM on borg to see if I could, in the direst of emergencies, access our off-site backups. That sort of worked, but the experience was so fiddly that I was reminded of all of HyperBackup’s pitfalls in one fell swoop–most notably that I effectively need a Synology to get at that data, which is not something I want to rely on.

Yes, there is a HyperBackup desktop application. No, it did not work for me–it apparently expects you to download backup files from the cloud to your local machine, and I need to be able to directly restore files from Azure, period.

After filing a ticket with Synology about my unresponsive system, they sent me an AI-generated troubleshooting list, in the middle of which was a step I could not find anywhere in their online documentation: booting the machine without any disks.

That apparently also reset the settings automatically (which is, in retrospect, weird, because it feels like that kind of emergency configuration should be stored in the chassis), and I was finally able to discover it on the network, reset the admin password, reconfigure the network, etc.

So if you have the same symptoms, this might save your day. It might also, as it turned out for me, be the prelude to an entire week of pain: mine spent the past five days or so grinding through data scrubbing, because that is a thing it felt like doing, and I’ve been coping with the fallout ever since–extremely slow access and very slow response times as I tried to double-check services and settings.

What Didn’t Work Right

First of all, all my containers were gone. Container Manager, for some reason, does not preserve any settings in this scenario, and if I didn’t have a copy of (most of) my stacks stored elsewhere, this would have been enough for me to never again run containers on a Synology.

As it was, I was able to point piclaw to the machine and have it reconstruct all critical services in a few hours (it would have been much faster if the NAS hadn’t been scrubbing at the same time). And, as it turns out, there was also enough residual info in the underlying Docker daemon itself to fill in most of the gaps.

But barring that, there were a bunch of things that made recovery a pretty stressful endeavor:

  • The mobile apps (DS Finder and the like) were useless for finding or diagnosing the issue at every step.
  • The web site did not list disk removal as a troubleshooting step (at least not that I could see, since it went straight into the dual-step reset procedure).
  • The timing documented for holding the reset button for reset 1 (4 seconds) was not accurate. It was more like 20, and I feared for a moment I might end up triggering reset 2, which would require reinstalling the OS.
  • Synology’s desktop tools are, to be brief, very poorly maintained and look like something out of the 90s, even down to the Windows look on macOS.

So even for an “appliance” NAS, the experience could be much better.

Let’s Have an Adventure

Resetting the configuration had zero impact on my data–at least so far as I can tell. Shares, users, all the regular stuff was preserved, and after a few glitches with cloud backups (because disk scrubbing made them fail overnight twice), everything seems in order.

But since the machine spent so long simultaneously scrubbing and swapping as I tried to restore services, it’s clear that I cannot rely on it for interactive use anymore.

Synology doesn’t really let me upgrade RAM on the thing (you sort of can, but it’s already capped at the maximum the J4125 can officially support), so I’ve started removing stuff from it–most of the Docker services I’ve been running there for years are now moving into microVMs or LXCs running elsewhere, and will either use the Synology as a “dumb” NAS and mount storage directly, or be backed up to it using Borg Backup Server (which is going to be the only new Docker container running on it).

I’ve already moved a couple of services off it, and having them run (even with very constrained resources) in separate microVMs on an N150 makes a world of difference–so much so that I have to wonder why I put up with the J4125’s slowness for years.

I set up daily snapshots for both VMs (plus a temporary direct-to-cloud backup), and am now slowly moving the rest. Or rather, piclaw is doing that: I had it draft a plan to group containers and create target VMs/LXCs, and the agent is now merrily copying data and container configs out of the Synology.

Mid-Term

After the dust settles, I am going to move all of my backups out of the Synology ecosystem–I currently rely on HyperBackup to back up my data to Azure, but the recovery attempt was so off-putting that I am going to look into using restic directly against Azure.

Backrest looks like a nice way to do that, with the added benefit that restic backups (which I have already been using for years) seem to work better with Azure storage tiering (and thus might even be cheaper in the long run).
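For reference, restic’s Azure backend needs little more than a repository string and a couple of environment variables, which is exactly the kind of “no appliance required” recovery path I want. A minimal sketch of the direction I’m leaning, with placeholder account, container, and paths:

```go
package main

import (
	"os"
	"os/exec"
)

// restic shells out with the environment restic's documented Azure
// backend expects. Account, container, and paths are placeholders.
func restic(args ...string) error {
	cmd := exec.Command("restic", args...)
	cmd.Env = append(os.Environ(),
		"RESTIC_REPOSITORY=azure:backups:/nas",      // container:path
		"RESTIC_PASSWORD_FILE=/etc/restic/password", // repo encryption key
		"AZURE_ACCOUNT_NAME=mystorageaccount",
		"AZURE_ACCOUNT_KEY=...", // or a SAS token via AZURE_ACCOUNT_SAS
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// One-time setup: restic("init")
	if err := restic("backup", "/volume1/shares"); err != nil {
		os.Exit(1)
	}
	// Restores work from any machine with restic installed:
	// restic("restore", "latest", "--target", "/tmp/restore")
}
```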

The Siri For Families Apple Will Never Build

Recent news got me thinking about the one thing I keep wishing Apple would build and almost certainly never will: a family-scoped AI assistant that actually works across all our devices.

I don’t mean a frontier model or a “reasoning engine”–just a competent, context-aware agent that understands my family as a unit. The shared calendar, the school schedules, the medication reminders, who’s picking up whom and when. The kind of thing that Apple Intelligence was supposed to be, except pointed at the problem that would actually matter most to the people who are already deep in the ecosystem and paying for it.

I am married with two kids. Between us we have more Apple devices than I care to count–and we are exactly the demographic Apple loves to put in keynote photos. And yet Apple treats us as completely separate customers who happen to share a credit card. Family Sharing is a permissions layer bolted onto individual accounts, and it shows in every single interaction–shared photo libraries (still broken), purchase management (still confusing), screen time (still adversarial rather than collaborative). Twenty-four years of “digital hub” strategy, and this is where we are.

What I Actually Want

Here’s what a competent family agent could do without being creepy–and in most cases, without even needing to leave the device:

  • Know that my son has a test on Thursday and hasn’t opened the revision material since Monday. A gentle nudge (to him), not a surveillance report.
  • Track our medication schedule and ping people (or me, if an elderly relative misses a window) without turning into a clinical monitoring tool.
  • Surface things on Apple TV that match what we actually watch, not what the recommendation engine wants us to try.
  • Coordinate pickup times, grocery lists, meal plans–the sort of mundane family logistics that currently live in a group chat and three different apps.
  • Make file sharing work the way a shared family folder should, rather than the absurd permissions mess it currently is.
  • Do smarter photo sharing–not just a wholesale shared library, but understanding who took the photos and where, and sharing only the relevant ones with the family, without it being an all-or-nothing proposition.
  • Better family e-mail, better event handling, better package tracking across household members.

I also want it to let me keep my parents and in-laws in the loop. Most of the above also applies to extended family, especially if you have elderly parents who need help managing their medications, appointments, and social connections. A family agent could be a lifeline for them without being intrusive.

None of this is exotic. Apple already does the understated version of some of it–surfacing birthdays, suggesting contacts to call at specific times, the quiet little iOS touches that work well precisely because they don’t try to be clever. A family agent would just be more of that, with the same understated functionality working across the whole household instead of being locked to a single Apple ID.

And none of it requires SOTA models, or selling out to Gemini. A 4B parameter model running on-device–the sort of thing I’ve been running for months–would handle the intent parsing and coordination.
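To sketch what I mean (with the on-device runtime stubbed out, since none of this maps to any real Apple API): the model’s only job is to turn an utterance into a typed, family-scoped intent, and everything downstream is deterministic code.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Intent is the typed, family-scoped result the model has to produce;
// everything after parsing is deterministic code on the device.
type Intent struct {
	Action  string   `json:"action"`  // "remind", "schedule", "share", ...
	Members []string `json:"members"` // which family members it touches
	When    string   `json:"when"`    // resolved by the model to ISO 8601
	Payload string   `json:"payload"`
}

// localModel stands in for a small on-device runtime constrained to
// emit JSON; the canned output here is obviously fake.
func localModel(utterance string) string {
	return `{"action":"remind","members":["son"],` +
		`"when":"2025-05-15T17:00:00Z","payload":"revision material"}`
}

func main() {
	var intent Intent
	raw := localModel("nudge him about Thursday's test")
	if err := json.Unmarshal([]byte(raw), &intent); err != nil {
		panic(err)
	}
	// No cloud round-trip, no shared-account hacks: just route it.
	fmt.Printf("%s %v at %s: %s\n",
		intent.Action, intent.Members, intent.When, intent.Payload)
}
```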

The hard part isn’t the AI. It never was. It’s the focus and the willingness to execute, and that’s where Apple has been asleep at the wheel for over a decade–and I am not going to hold my breath that Ternus will be the one to wake them up, or that Apple will ship the APIs and interoperability that would let third parties build this instead.

They should have an absurd advantage here: they own the OS, the hardware, the sync layer, the health stack, the media stack, the calendar, the reminders. Nobody else even comes close to that vertical. And they’ve done nothing with it.

I know this is possible because I’ve been building something like it myself–a personal agent that fits in a single binary and a database, carries its own scripts and state, and runs on anything from a Raspberry Pi to a desktop. The TypeScript-based version already manages my homelab, files links to my wiki, coordinates across machines, and does it all with about 300MB of RAM (the Go version should take up 30).
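The “single binary and a database” shape is simpler than it sounds. A minimal sketch–the schema is made up for illustration and is not the actual project’s:

```go
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // pure-Go SQLite driver; no cgo, runs on a Pi
)

func main() {
	// All agent state travels in one file next to the binary, so
	// moving the agent to another machine is a two-file copy.
	db, err := sql.Open("sqlite", "agent.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	for _, stmt := range []string{
		`CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)`,
		`CREATE TABLE IF NOT EXISTS scripts (name TEXT PRIMARY KEY, body TEXT)`,
		`CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, spec TEXT, done INTEGER DEFAULT 0)`,
	} {
		if _, err := db.Exec(stmt); err != nil {
			log.Fatal(err)
		}
	}

	// The agent records what it learns as it goes.
	if _, err := db.Exec(`INSERT OR REPLACE INTO memory (key, value) VALUES (?, ?)`,
		"homelab/nas", "degraded: scrubbing"); err != nil {
		log.Fatal(err)
	}
}
```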

I built this on the equivalent of a Raspberry Pi, but Apple can’t do it with a trillion-dollar platform because they won’t treat families as anything other than a billing construct.

Just to add insult to injury, I could do most of what I want if we were in the Google ecosystem. But with iCloud it’s impossible to access shared task lists (or anything else, really) with any sort of standard protocol or documented API. With Google (or even Outlook), most of it is accessible.

Every Apple equivalent is there, but they just refuse to connect them, or let anyone use them.

The Automation Graveyard

I know I’ve banged on this drum for years, but Apple has spent the better part of a decade systematically breaking OS automation, and they’ve done it so thoroughly that it’s hard to believe it’s accidental.

AppleScript is on life support. Automator was effectively killed. Shortcuts was supposed to replace both, and instead became an App Store for workflow fragments that nobody maintains and that break with every major OS update. The Shortcuts editor is still painful for anything beyond “open this app and do one thing”, and the integration points with third-party apps range from spotty to fictional.

On Android, you can set up Tasker automations that trigger on location, time, sensor data, app state, notification content, Bluetooth proximity–and chain them into workflows that persist across OS updates. On Windows, I have a piclaw instance that can drive the entire desktop via a Windows API extension. The gap between what those platforms allow and what Apple permits isn’t narrowing. It’s getting wider.

Shortcuts could have been the foundation for family automation. Instead, it’s a gallery of pretty icons.

Why It Won’t Happen

I suspect the real reason is structural. Apple doesn’t think of families as a product category. They think of them as a collection of individual customers who happen to share a payment method. Every design decision reflects this: iPads are still single-user devices. iCloud storage is pooled, but grudgingly, and shared files live in a sort of no-man’s-land. App purchases are shared reluctantly, buried in a submenu of a submenu. Family Sharing is an afterthought, not a platform.

The only thing that Apple seems to care about (after iMessage) is that we can share what we are watching on Apple TV, which has been relevant in our family for exactly zero minutes since the feature launched.

And until someone at Apple decides that “a household of four using Apple devices” is a use case worth designing for rather than designing around, Siri will remain a single-user voice assistant that can’t reliably set a timer on the right HomePod.

With Ternus coming from hardware, I’d like to think there’s a chance he gets that a trillion-dollar ecosystem ought to handle a shared grocery list. But I’ve been hoping Apple would sort out family sharing since iCloud launched, so I’m not holding my breath here.

I Think I Figured Out What an AI IDE Looks Like

I’ve been mulling the UX arc I’ve been going through over the past couple of years, and I think it was mostly the same for everybody:

  • Copy/paste into a chat web UI
  • IDE with a chat sidebar
  • TUI chat (Mistral Vibe, pi, Codex CLI, Claude CLI, etc.)
  • Rich chat in a native app (Codex desktop, Claude desktop)
  • Web chat with rich interactive widgets (piclaw)

Since I spend a lot of time on my iPad, piclaw’s web timeline has become my default–I can pop open the terminal or the editor at will, but coding is still a game of balancing drudgery with creativity, and the “creative” part works well in chat.

At least for me, using AI for my projects has been a matter of staying in flow. If you open a new chat thread for every feature or fix, going back to the editor takes you away from the flow–it’s much easier to have the model spew the changes in the chat, highlight the bits you want changed, and iterate directly in it.

And I’ve just realised, after adding text highlighting and annotation support to the piclaw timeline (to make it easier to point out specific things to the model), that what I’m building is a notebook for code.

I’m sure Stephen Wolfram would be delighted to be proven right, even if this paradigm isn’t really for everybody.

Of course, this scales poorly for refactoring, where you have a zillion modified files, but outside of refactors I am the kind of person who likes small, testable iterations and still looks at the code.

I also think that being able to scroll back up, fish out an older interaction and re-use it (or riff on it) is powerful, and what I am planning to do next is to inject an editor pane into the web chat to directly review and edit code inline–not as a separate tab, but as part of the conversation flow.

There’s something about this that irks my IDE-addicted brain, of course, but it’s tantalising, and I quite enjoy sitting on the couch with my iPad after a long day in front of my desktop–and yes, using handwriting recognition to prompt it works great; I love living in the future.

Notes for May 3-10

This was a weird week, both because I keep waking up at 5AM with my sinuses clogged and because I feel like I’m losing momentum. Feeling almost permanently cotton-headed–sleepy from sheer exhaustion or from antihistamines–certainly has something to do with it.

We Must Go Deeper

I spent the latter part of the week hacking away at go-ds4 and go-pherence, which was interesting to me not just because I am still trying to get Vulkan to work for inference on a couple of SBCs, but also because, all of a sudden, a bunch of my projects converged on SIMD and assembly–including, of all things, an H.264 decoder I plan to add to go-rdp.

This meant going all in on model internals again, which is something I’ve neglected for a while and that I would otherwise find fascinating were it not for my general state of tiredness.

My Little

go-joker went from “forked and interesting” to “actually competitive with Python” in about two days of focused work. Again, there is a weird serendipity and convergence across most of my other projects (like the JITs I’ve been hacking on in macemu-jit and previous-jit), but this time I took out the CLR-via-C# approach and had Codex build a tiered IR bytecode interpreter that can in turn compile pure numeric loops via wazero, and doesn’t have a GIL (thanks to goroutines).
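The tiering idea, reduced to a toy: count how hot a loop gets and swap in a faster version past a threshold. Here the “compiled” tier is just a Go closure standing in for the wazero handoff:

```go
package main

import "fmt"

type vm struct {
	hot      map[int]int             // bytecode offset -> execution count
	compiled map[int]func(n int) int // tier-2 replacements, keyed the same way
}

// sumLoop is the interpreted tier: dumb, safe, always available.
func (v *vm) sumLoop(at, n int) int {
	v.hot[at]++
	if f, ok := v.compiled[at]; ok {
		return f(n) // tier 2: use the compiled version
	}
	if v.hot[at] > 100 {
		// Tier-up: the real thing lowers the IR to wasm and
		// instantiates it with wazero; a closure stands in here.
		v.compiled[at] = func(n int) int { return n * (n - 1) / 2 }
	}
	total := 0
	for i := 0; i < n; i++ { // tier 1: plain interpretation
		total += i
	}
	return total
}

func main() {
	v := &vm{hot: map[int]int{}, compiled: map[int]func(n int) int{}}
	for i := 0; i < 200; i++ {
		v.sumLoop(0, 1000)
	}
	fmt.Println(v.sumLoop(0, 1000)) // 499500, now via the "compiled" tier
}
```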

I should really write about that, when I feel better.

Android Remoting

As part of an ongoing experiment to see just how far I can go without the Android SDK installed, I kept nudging my Android RDP server along, and am generally very happy with all the automated testing scaffolding I built around that, because I’ve extended it to vibes and piclaw with great success.

My Agentic Work Is Nearly Done

I think piclaw is pretty much done by now. I backported kitty graphics support into the terminal (the ghostty-web ecosystem is pretty amazing on its own), and of course I use it constantly (I am actually typing this draft in it). I will be doing some fixes and at least one UX release, but I need to go back and fix my Synology, redeploy a bunch of things in my homelab, and prep for more electronics projects.

But first, I’m going to take a nap, because I did wake up at 4AM again, crafted a dead stupid add-on and badly need to rest.

The Local AI Moat

Regular readers will know that I’ve spent most of the past two years shoehorning LLMs into single-board computers, partly as a learning exercise and partly because there are lots of local/”edge” applications where semantic reasoning (no matter how limited) and “interpretation” of sensor data are actually useful.

But now we’re at a point where running a decently useful open weights model on your laptop is entirely feasible.

This comes at what is possibly the worst possible time, and after having started my own inference library and tried hacking away at @antirez’s brilliant hack within my meagre resources, I feel like a serious rift is developing between the “haves”, who were lucky to get hardware in time (or can splurge multiple K of European Pesos on it), and the “have nots”.

The societal impact of the entire thing in the always hype-driven geek community is, of course, fascinating (especially since a very small number of people have a disproportionate amount of influence in this little echo chamber), and I sometimes feel like Jane Goodall observing packs of opinionated chimpanzees, but I digress.

Personally, after spending the day mulling on this, I find the whole thing extremely depressing, for three reasons:

  • Despite everything, I see computers as something inherently distributed and personal. There are a lot of latent contradictions here, yes–I’ve learned to live with them.
  • As a European citizen, the geopolitics of the asymmetrical situation we are in today regarding technology and AI in general bothers me deeply–and yes, I have learned to deal with that too, but I really wish I could do something about it.
  • Personally, I can’t afford to keep up. People in startups, self-employed or in very specific minuscule niches might be able to spend enough to do so, but I can’t.

I’m thrifty by nature, usually plan (and over-think) my purchases years in advance, and am at a point in my career (and the industry) where job security is anything but guaranteed, so saving up every dime I can for a potential rainy day has been very much on my mind–I now agonize over stuff as simple as ordering a 70 EUR battery to revive an eight-year-old laptop (because, yes, I do still use old machines).

So there is (pardon my French) absolutely no fucking way I am getting decent local inference hardware anytime soon. And I count myself lucky I built my current machine when I did, even if that was also a painful decision at the time and it is now hopelessly outdated for most things.

That’s it. I’ve vented. Now I’m going to take something for my sinuses, chase it with an antihistamine, and doze off until 4AM tomorrow.

Notes on GPT 5.x Model Regressions

I’ve been getting annoyed at constant code regressions in piclaw for the past few weeks. Something was off–even after bumping the test suite to the point where it catches most mechanical errors, gpt-5.5 kept making unrelated edits to code that should have been left alone, and I was getting tired of babysitting it.

The pattern was always the same: it would follow a strict spec and then “improve” three other things nobody asked for, and since I am using piclaw, know exactly what the agent does, and can trace context and requests, I know it isn’t a harness bug.

So I spent last night investigating, and gave both gpt-5.3-codex and gpt-5.5 the exact same prompt, from clean sessions:

audit this codebase thoroughly for code smells and logic errors and fix them.

Two identical worktrees, two models, same system prompt, same tooling. Reset both, run, compare results. I did this five times, and gpt-5.3-codex produced more complete fixes, caught more subtle issues, and generated more reliable tests in every single run. Not by a slim margin–noticeably, consistently better.
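The harness amounted to very little code. A hypothetical sketch–the agent CLI and worktree layout are placeholders for whatever you drive:

```go
package main

import (
	"fmt"
	"os/exec"
)

const prompt = "audit this codebase thoroughly for code smells and logic errors and fix them."

// runIn executes a command inside a worktree; a real harness would
// capture and archive the output instead of discarding errors.
func runIn(dir, name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Dir = dir
	return cmd.Run()
}

func main() {
	trees := map[string]string{ // worktree -> model
		"wt-a": "gpt-5.3-codex",
		"wt-b": "gpt-5.5",
	}
	for i := 1; i <= 5; i++ {
		for dir, model := range trees {
			// Hard reset so every run starts from the same tree.
			runIn(dir, "git", "reset", "--hard", "HEAD")
			runIn(dir, "git", "clean", "-fd")
			// "agent" is a stand-in for whatever CLI you drive.
			runIn(dir, "agent", "--model", model, "--prompt", prompt)
		}
		fmt.Printf("run %d done: now diff wt-a against wt-b\n", i)
	}
}
```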

I don’t have hard data beyond “I looked at the diffs and one set was clearly more thorough than the other, five times in a row.” This is anecdotal, heavily tied to the codebase I ran it in, but feels “right” in a way that explains my perception over the past few weeks.

What I think happened

I noticed a similar thing earlier this year when switching between Anthropic’s opus-4.5/4.6 and OpenAI models–gpt models consistently caught structural issues that opus and sonnet glossed over (or just merrily felt were “right”, hippie-style), and their fixes were more surgical. I got used to that gap and worked around it.

What’s odd is that the same gap now exists within OpenAI’s own family. gpt-5.4 was less thorough than gpt-5.3-codex for code work, and gpt-5.5, well… is “worse” in an as yet unspecified way. Yes, the newer models are better at conversation, better at following complex instructions in English, more “pleasant” to interact with–but when you ask them to find every logic error in a 2000-line file, they’re worse at it than their older sibling.

I think they’ve been tuned for broader, more generic behaviours and the code analysis got diluted in the process. “Be helpful across a wide range of tasks” apparently trades off against “be exhaustive and precise about code.” Go figure.

What I’m doing about it

I’m using gpt-5.3-codex as my audit model, and having pi and piclaw switch to it whenever I say “audit”.

It does the hard pass–finding code smells, logic errors, missing edge cases, inconsistent patterns–and I then go back to using the newer models for the conversational work, planning, and tasks where breadth matters more than depth. It also seems to use fewer tokens for the same work, though I don’t have hard data on that because, well, I have a life.
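The routing rule itself is trivial–a sketch, with everything beyond the model names hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// pickModel pins the older codex model for anything that smells like
// an audit, and leaves everything else on the newer default.
func pickModel(message string) string {
	if strings.Contains(strings.ToLower(message), "audit") {
		return "gpt-5.3-codex" // the thorough one
	}
	return "gpt-5.5" // better at conversation, planning, breadth
}

func main() {
	fmt.Println(pickModel("audit the sync package for logic errors"))
	fmt.Println(pickModel("draft a plan to split these containers"))
}
```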

The year-long pattern I’d been following–sketch projects out with opus-4.x, then do the real work with gpt–is now subtly broken. In practice it’s become: use whatever to get started, but run reviews with a -codex model before you trust the output. The combination works, but it’s faintly ridiculous that I’m using an older model to mark the newer one’s homework.

This also means my piclaw instances now run different models for different tasks, which is one more argument for the pi/gi approach of keeping the model layer swappable and the tool surface minimal–something I have written about here before. If the best code model changes every quarter–and apparently it can change backwards–you want the plumbing to not care.

Notes for April 27 – May 3

This was an absurdly productive week, at least on a personal level. I’m not sure whether to be pleased or worried about the number of projects that moved forward simultaneously, but here we are.

Read More...

Lessons on Building MCP Servers

I’ve been building MCP servers for a while now–I wrote about them last year, started out by creating umcp, and I’ve recently opened up an Office document server that’s been battered by enough models against enough real documents that the patterns have settled.

Read More...

App Notes: Web App Viewer

I got annoyed enough with Safari Web Apps to write my own replacement.

Read More...

Notes for April 20-26

Amidst the chaos brought on by my usual seasonal allergies, work turned out to be calmer than usual–the usual industry churn and constant rumors of layoffs have made “calmer” a relative term, though–so most of my evenings went to projects.

Read More...

Notes for April 13-19

This was a pretty decent week despite my allergies having kicked in to a point where I have constant headaches, but at least I had quite a bit of fun with my projects.

Read More...

Notes for April 6-12

Thanks to a bit of spillover from Easter break, this was a calmer, more satisfying week where I could actually get stuff done and even have a bit of fun.

Read More...

Apple, Still

I have been having feelings about Apple lately. This blog may have drifted a fair way from its original focus, but I am still, first and foremost, an Apple user–just not an exclusively Apple user, and perhaps not even a particularly obedient one anymore, since I use both Windows and Linux every day and have grown used to judging platforms by what they let me get done rather than by whatever story they are trying to tell about themselves.

Read More...

The Orange Pi 6 Plus

This was a long one–I spent a fair bit of time with the Orange Pi 6 Plus over the past few months, and what I expected to be a quick look at another fast ARM board turned into one of those test runs where the hardware looks promising on paper, the software is wonky in exactly the wrong places, and you end up diving far more into boot chains, vendor GPU blobs and inference runtimes than you ever intended.

Read More...

Notes for March 30 – April 5

This was a shorter work week partly due to the Easter weekend and partly because I book-ended it with a couple of days off in an attempt to restore personal sanity–only to catch a cold and remain stuck at home.

Read More...

The Xteink X4

I got an Xteink X4 this week, and my first reaction was somewhere between amusement and nostalgia–it is absurdly small, feels a lot better made than I expected for the price, and the form factor harks back to the times when I was reading e-books on Palm PDAs and the original iPod Touch.

Read More...

Hans Zimmer

At least they aren’t from Behringer
Modular synths on stage. Who would have thought?

Notes for March 23–29

Work ate the week again. I’m exhausted, running on fumes, and daylight saving time stole an hour of sleep I could not afford–the biannual clock shuffle is one of those vestigial absurdities that nobody can be bothered to abolish, and I’m starting to take it personally.

Read More...
