This was a lively week both AI-wise and otherwise–if you’ll pardon the pun.
The upshot was that I didn’t get a lot of sleep, seeing as I spent a good part of the week watching the insanity around deepseek-r1, reading up on its research paper piecemeal during the evenings, and re-jigging my litellm setup to front for it all.
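My actual setup runs the litellm proxy in front of several backends, but the gist of “fronting” deepseek-r1 is easier to show with litellm’s Python API. This is a minimal sketch, assuming DeepSeek’s OpenAI-compatible endpoint, the deepseek-reasoner model name, and a DEEPSEEK_API_KEY environment variable–none of which are spelled out in this post.

```python
# Minimal sketch: routing a request to deepseek-r1 through litellm.
# The endpoint, model name, and env var are assumptions for illustration;
# the real setup uses the litellm proxy in front of multiple backends.
import os

from litellm import completion

response = completion(
    model="openai/deepseek-reasoner",     # generic OpenAI-compatible route in litellm
    api_base="https://api.deepseek.com",  # DeepSeek's hosted endpoint (assumption)
    api_key=os.environ["DEEPSEEK_API_KEY"],
    messages=[{"role": "user", "content": "Summarize the deepseek-r1 paper in two sentences."}],
)

# The final answer comes back in the usual place; how much of the chain of
# thought is exposed depends on the provider.
print(response.choices[0].message.content)
```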
Artificial Intelligence
The nicest thing about these “reasoning” models is that (at long last) we get some insight into how they arrive at their output, which makes them somewhat auditable, albeit imperfectly and only at a high level, even as humans continue to struggle to realize that these things are neither sentient nor truly reasoning.
But hey, at last I can get a decent review of my posts before I publish them–spell-checkers don’t really get grammar, grammar checkers don’t grasp sentence structure well enough to flag missing words, and even if you check sentence structure, it’s still nice to have something tell me whether I wrote three paragraphs and actually got to the point.
So I’m now running my posts through o3 or o1-mini as proofreaders with either my fabric setup or the editors I’m using. I could have used deepseek-r1, but Zed has recently added support for o1 models as part of its GitHub Copilot integration, and so I configured Obsidian to match.
Coding
Since I mentioned Zed, I should say I’m quite impressed with it so far–not because of features (it doesn’t have that many that I find compelling), not because it is a direct replacement for VS Code (it sort of isn’t), but because, at least for the moment, it does less stuff better, as well as being more responsive and lightweight.
I blame the increasingly long startup times I’ve been getting in VS Code and the realization that Zed ships with most of what I need, as well as an odd fit for my idiosyncrasies–for instance, even though it lacks full git integration, that isn’t really an issue for me, since even inside VS Code I would, 90% of the time, just commit from the terminal anyway.
Mind you, it doesn’t really feel anywhere near as well integrated in Fedora, at least visually.
Homelab
Even though Tailscale has removed the vast majority of scenarios where I would even think of exposing some of my internal services on a public IP so I could get at them over the Internet, I do have one scenario where it would be great to have a web-based UI accessible via a Cloudflare tunnel, with an authenticating proxy in front.
Being the sort of person I am, I decided to try to do it in a moderately sane fashion and set up single sign-on and TOTP using Authelia or Authentik–yes, for a single service, because, well, I might want to add more, etc. So I effectively nerd-sniped myself into setting both up on one of my z83ii boxes, which only has 2GB of RAM.
At least it was another good use case for these KVMs I’ve been testing.
Authentik was promising, but running Postgres, a front-end, and a bunch of additional processes that don’t really do anything useful when you have one service and one user was a bit much, and complete overkill for the hardware. So Authelia it is, even though its OpenID setup can be overly contrived.
But hey, I used to work with much worse IDPs. It’s all about finding the right balance between functionality and resource efficiency, and right now Authelia is ticking along in under 256MB of RAM like a champ, with plenty to spare for tailscaled and cloudflared as well as a KasmVNC container that is standing in for the final service.
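As a sanity check on that chain (tunnel, proxy, Authelia, service), the little sketch below verifies that an unauthenticated browser-style request to the tunneled service bounces to the SSO portal instead of reaching KasmVNC. The hostnames are placeholders, not the real ones, and the exact redirect behaviour depends on how the reverse proxy’s forward-auth is wired up.

```python
# Smoke test: unauthenticated requests to the service should be intercepted
# and redirected to the Authelia portal. Hostnames below are hypothetical.
import requests

SERVICE_URL = "https://desktop.example.com"  # KasmVNC behind the Cloudflare tunnel (placeholder)
AUTH_PORTAL = "https://auth.example.com"     # Authelia portal (placeholder)

# Pretend to be a browser so forward-auth answers with a redirect rather than a bare 401.
resp = requests.get(
    SERVICE_URL,
    headers={"Accept": "text/html"},
    allow_redirects=False,
    timeout=10,
)

assert resp.status_code in (301, 302, 303, 307, 308), f"expected a redirect, got {resp.status_code}"
assert resp.headers.get("Location", "").startswith(AUTH_PORTAL), "not redirected to the SSO portal"

print("Unauthenticated requests are being intercepted by Authelia.")
```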
As a side note, I’m actually a bit annoyed at how fast KasmVNC is–I would have preferred a nice, modern RDP gateway that isn’t Apache Guacamole (because that hefty hunk of Java also can’t run on low-end hardware), but there isn’t really anything simple and Open Source out there that actually works.
So far things are mostly working, but not the way I want them to. I’m still troubleshooting some configurations and optimizing performance, so I revived my ancient Acer C720 Chromebook yet again and managed to get Fedora 41 running on it solely for the sake of having a “fresh” thin client.
Tidying Up
My crusade against clutter continues, with a few more attempts at sorting out the contents of some ZIP disks and CDs I extracted a couple of years back.
I’ve been going through old files and deciding what to keep, archive, or discard, and since there are quite a few old screenshots and bits of artwork, I took a look at some photo management tools (immich, damselfly, photoprism) in search of something simple and lightweight that supports SQLite–you can probably spot the pattern here, and I suspect I’m just going to hack my wiki engine to index them with an LLM.
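The “index them with an LLM” idea would probably look something like the sketch below: caption each image with a vision-capable model and stash the result in SQLite so the wiki engine can search it. The model name, folder path, and table layout are all assumptions for illustration, not anything I’ve actually wired up yet.

```python
# Rough sketch: caption screenshots with a vision-capable model and index
# them in SQLite. Paths, model, and schema are placeholders.
import base64
import sqlite3
from pathlib import Path

from openai import OpenAI

client = OpenAI()
db = sqlite3.connect("screenshots.db")
db.execute("CREATE TABLE IF NOT EXISTS images (path TEXT PRIMARY KEY, caption TEXT)")

for image in Path("extracted/zip-disks").rglob("*.png"):
    data = base64.b64encode(image.read_bytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any vision-capable, OpenAI-compatible model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this screenshot in one searchable sentence."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{data}"}},
            ],
        }],
    )
    caption = response.choices[0].message.content
    db.execute("INSERT OR REPLACE INTO images VALUES (?, ?)", (image.as_posix(), caption))
    db.commit()
```

A plain `SELECT path FROM images WHERE caption LIKE '%whatever%'` (or SQLite’s FTS5, for something less crude) would then be enough for the wiki to find things.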