Since Alexa has changed the way it handles voice recordings and I don’t feel comfortable with their privacy disclaimers, I decided to retire the Echo Dot I have been using for years in my office and replace it with a HomePod Mini.
This is pretty much how I feel about it.
And of course there are a few catches–for instance, it’s been seven years since the first HomePod was launched, and it is still not officially available in Portugal. So I did some spelunking on eBay to have a second-hand one shipped from the US (which, ironically, was the cheapest way to get it).
The second catch is, of course, that I have gone all-in on Siri at what is likely the worst possible time. And yes, I know that there is absolutely no guarantee that Apple will bring any future “AI Siri” features to current devices.
In fact, there’s a near certainty of the opposite, especially since the product cycle for HomePods is just plain weird and Apple will definitely want to upsell you to a “ScreenPod”.
And yes, I was very close to getting a Home Assistant Voice and going through the entire rigmarole of setting up a local LLM to handle intents and have a fully offline solution, but that would have been an additional time sink and required a lot more changes to our home automation setup than I deemed useful (or viable in the short term).
Plus, saying “Okay Nabu” just feels ridiculous (and yes, I know a lot about the Wyoming Satellite project and how you can configure wake words, etc. - I just really didn’t want to do any of it until the tech matures another year).
But I digress. Let’s go over my notes on setting up the Orb of Artificial Stupidity (which is what I initially named it on the Home app).
Although I was fully aware of the fact (and have mentioned it several times in the past as a reason not to get one), it’s simply asinine that the HomePod Mini doesn’t have an audio-out jack.
And I was also aware that the power cable (which terminates in a USB-C plug that requires a PD or Apple power adapter) is non-removable (you can remove it on the “normal” model with some effort, but on the Mini it’s soldered to the board).
Otherwise, it’s what you’d expect: a single wide-range speaker (with very good bass for the volume) with a few LEDs grafted on top inside a somewhat fashionable (but utterly forgettable) textured sphere.
And the touch controls are… well… not good. Fortunately, I have it tucked away behind my monitor just out of reach, so I don’t try to use them for anything but muting or un-muting it.
I have no issues using the HomePod Mini to listen to podcasts, but music has to be a) in stereo and b) through my 2.1 speaker setup, which has a little passive audio mixer in front to ensure I can have up to four computers or synthesizers permanently plugged in.
So I took the opportunity to upgrade shairport-sync on the Raspberry Pi that manages my office lights, which also acts as a unified streaming receiver via a little audio DAC. I also run a PlexAmp instance there, so it’s been the default “play on” destination for all of my music, but the HomePod can’t stream to anything that doesn’t speak AirPlay 2.0 (or Bluetooth).
Recompiling shairport-sync to support AirPlay 2.0 was a relatively trivial thing, but, again, it is an insanely more contrived process than just using a cable. I’m used to the vagaries of AirPlay and Apple’s silent war against audio jacks across anything that isn’t a MacBook, but it still feels stupid.
Still, at least I was able to keep using the Raspberry Pi as a “unified” audio receiver of sorts.
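As an aside, a quick way to confirm that the binary you end up with actually has AirPlay 2 compiled in is to inspect its version string. The snippet below is a minimal sketch rather than gospel: the exact format of the version output can vary between shairport-sync releases, so the “AirPlay2” marker it looks for is an assumption based on current builds.

```python
# Hypothetical sanity check: confirm that the shairport-sync binary was
# built with AirPlay 2 support by inspecting its version string.
# The exact format of the version output may vary between releases.
import shutil
import subprocess

def airplay2_enabled(binary: str = "shairport-sync") -> bool:
    path = shutil.which(binary)
    if path is None:
        raise FileNotFoundError(f"{binary} not found in PATH")
    # `shairport-sync -V` prints the version plus the compiled-in options.
    version = subprocess.run([path, "-V"], capture_output=True, text=True).stdout
    return "AirPlay2" in version

if __name__ == "__main__":
    print("AirPlay 2 build:", airplay2_enabled())
```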
Once I had everything working, we began the process of training the human intelligences that were to benefit from this. For starters, I disabled speaker recognition (since I needed to have other people be able to set lighting scenes or play music), and everything mostly works… except Siri’s ability to understand context. As usual.
But in this case, it has a couple of extra annoying quirks. In particular, I have to explicitly tell it where to play music all the time. To simplify things, I named my Raspberry Pi’s AirPlay service “Desk”, but the HomePod needs very specific guidance to use it.
“Siri, play (that U2 album Apple foisted on everyone) on Desk” works, but given the vagaries of “dumb Siri” context handling, that also means that “Siri, pause” does not work at all from that point onward, because Siri is fundamentally unable to do that unless you specifically tack on “… music on Desk”.
Similarly, even though the HomePod’s location is set to my office, I have to be careful to avoid saying “turn off all the lights” without adding “… in the office”, otherwise everything in the house will be plunged into darkness.
This, as you might expect, is not a popular outcome for most people.
I also had to rename my “Office Desk Lamp” to “Desk Lamp” because it was getting tedious to say “Siri, turn on the desk lamp” and have it say there was no such thing in my home, even though both the HomePod and the lamp are configured to “be” in the office.
These are all things I was mentally prepared for, since I have been using Siri with my Apple Watch and iPhone for many years now and Stockholm Syndrome nearly prevents me from pointing out, yet again, that the experience of using Siri for anything inside the home has, if anything, been thoroughly and consistently this bad since the very beginning–and, again, I have zero hope of Apple ever fixing it properly.
But hey, we can get it to work, and I trust Apple a lot more than I trust Alexa, so even though I am still sore about the loss of the Echo Dot’s audio jack and the inability to stream any of my Plex music to the HomePod, I’m telling myself it’s all worth it because I get to rebuild all my playlists on Apple Music.
Which is kind of what Eddy Cue wanted all along, no?
I have been mostly in “inbox zero” mode for the past few weeks, which generally meant checking stuff off my to-do list, trying to ignore the news, and using some fairly colorful language when I didn’t.
I even went out and had fun a couple of weekends in a row, which is somewhat of a novelty and tore chunks of otherwise rather unproductive mulling off my schedule.
That led up to Easter break and a few days in the countryside trying (somewhat unsuccessfully) to catch up on my reading, which I’ve been neglecting of late and I feel somewhat guilty about.
One thing I don’t feel guilty about, though, is pausing most of my AI stuff to see if I can get excited about any of it again. That has been quite the challenge too, even as work takes me down various related rabbit holes.
There’s such a wide gap between expectations around agentic AI and reality that my current survival strategy is to take it all in in small doses to minimize annoyance. Also, every time I type “agentic” I have a minor tussle with whatever form of spell checking happens to be at hand, which is a good reminder that the whole thing is still a bit of a joke.
That doesn’t mean I’ve sworn off AI completely, though. I’ve actually picked up a few more mini-projects, for some of which I’m using Gemini 2.5 (via GitHub Copilot’s new Agent mode in Visual Studio Code):
A small set of tools to convert quantized models into something I can use with rkllm or ROCm.
A new touch dashboard based on pygame, because upgrading Chromium on my current dashboard broke the whole thing and I’m tired of web technology bloat (it was either Godot or pygame, but targeting a Raspberry Pi 3 with Godot is far from a trivial endeavor, so I went with something I could deploy and maintain easily). There’s a rough sketch of the approach just after this list.
Replacing a couple of my Tasmota devices with HomeKit-enabled ones (as well as doing some minor upgrades here and there).
Designing and 3D printing a few enclosures and other assorted bits (it’s quite rewarding to churn out PETG spacers to patch cracks in old window blinds in under 20 minutes).
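To give an idea of what the dashboard approach looks like, here is a minimal sketch of the pygame side of things. The tile names and the on_tap handler are placeholders I made up for illustration; a real dashboard would obviously call into whatever home automation API you use.

```python
# Minimal sketch of a pygame touch dashboard: a grid of labeled tiles that
# invoke a callback when tapped. Tile names and on_tap are placeholders.
import pygame

TILES = ["Desk Lamp", "Ceiling", "Scene: Movie", "Scene: Off"]  # placeholders

def on_tap(name: str) -> None:
    print(f"tapped: {name}")  # stand-in for an actual automation call

def main() -> None:
    pygame.init()
    screen = pygame.display.set_mode((800, 480))  # typical small touchscreen
    font = pygame.font.SysFont(None, 36)
    clock = pygame.time.Clock()

    # Lay the tiles out in a 2x2 grid with a small gutter.
    w, h = screen.get_size()
    rects = []
    for i, name in enumerate(TILES):
        col, row = i % 2, i // 2
        rects.append((pygame.Rect(col * w // 2 + 10, row * h // 2 + 10,
                                  w // 2 - 20, h // 2 - 20), name))

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            # Most touchscreens surface taps as mouse button events.
            elif event.type == pygame.MOUSEBUTTONDOWN:
                for rect, name in rects:
                    if rect.collidepoint(event.pos):
                        on_tap(name)

        screen.fill((20, 20, 20))
        for rect, name in rects:
            pygame.draw.rect(screen, (60, 60, 60), rect, border_radius=8)
            label = font.render(name, True, (230, 230, 230))
            screen.blit(label, label.get_rect(center=rect.center))
        pygame.display.flip()
        clock.tick(30)

    pygame.quit()

if __name__ == "__main__":
    main()
```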
I’ve also been mulling whether I should do something to preempt rising hardware costs–for instance, I haven’t given up on getting a Ryzen AI HX machine for testing, since it seems like the only way to get any GPU with over 24GB of RAM at an affordable price short of getting a new Mac–which I was budgeting for early next year.
But given the political climate, I think I might bring that forward a bit…
Pete Koomen’s essay takes aim at the current crop of AI apps by comparing them to “horseless carriages” – a neat metaphor for tools that lean too heavily on outdated thinking.
He points out that Gmail’s AI email draft generator misses the mark by forcing a one-size-fits-all tone and then, for an encore, he goes and builds an almost perfect common-sense prototype that puts a lot more control in the hands of the user.
Letting people customize behavior rather than defaulting to slop is definitely a much better way to use AI, and I’d be here for it if people were actually doing it.
The Synology software makes it dead easy to have encrypted cloud backups to Azure and mirror both my Dropbox and OneDrive contents locally, and those are just two of the features I have relied on over the years–many other people use a lot more of their core software, and pretty much everyone I know has a Plus model.
The TerraMaster F4-424 MAX, for instance, doesn’t do all of those things out of the box, and going custom would be a major pain even for me as I’m just running Proxmox and a bespoke Samba server there, treating it as a dedicated VM and media server. But maybe I should start hedging my bets…
I haven’t written about 3D printing in a while, but I have both kept at it and actually done a bit of market research–I’ve been considering either building an MMU or getting a multi-material printer, but I haven’t made up my mind yet, and the tariff war isn’t helping.
But one of my checklist features for any upgrade is having integrated drying capabilities, so when EIBOS reached out to me about their Polyphemus filament dryer, I was intrigued.
The Polyphemus filament dryer next to my SK1.
Disclaimer: EIBOS sent me a Polyphemus and 3Kg expansion kit free of charge, and as usual this article follows my review policy.
Even though all my filament storage boxes have big, hulking car-grade dehumidifier bags, I have felt the need to resort to a dirt cheap filament dryer for PETG and other hygroscopic filaments, since it makes a significant difference in print quality. But the one I had was a simple, somewhat noisy box with a heater and a timer, and while it works well enough to keep around as a backup, it was a bit of a hassle to use.
I was curious to see how the Polyphemus would fare, but also how much of a difference it would make in terms of the overall experience. Print quality will always be a function of the filament and the printer, but the drying process itself can be a bit of a hassle.
The Polyphemus came in a kit–actually two kits, since I also got the skirt/extension to add enough height for 3Kg spools, which was welcome.
Assembly was pretty straightforward since (other than the extension) it was all done with the exact same kind of screws (always a good thing in kits) and the instructions were abundantly clear, so it took around 30 minutes, and even then mostly because I needed to clear some desk space.
That consisted mostly of laying down a part of the frame, slotting in corner rods (which look like aluminum extrusions but are actually some form of polycarbonate), sliding the acrylic panels in between, and fastening it all together.
I didn’t take any pictures of the assembly process, but here are some of the parts and a couple of pages from the manual, which I found to be very clear and easy to follow:
All of it was pretty straightforward, and you only need the little wrench that comes with the kit.
My first impressions of the Polyphemus just after assembly were very positive:
Even if it is a kit, the build quality is great. The frame is made of what I assume to be polycarbonate “extrusions” that feel sturdy, and the combination of those and the acrylic panels makes the whole thing feel solid and well-made.
You get spare parts for the motor and some fixtures, which is a super nice touch.
It has a built-in power supply, which allowed me to get rid of a power brick and rationalize cabling a bit.
The cover is huge, but easy to handle and the lack of a hinge means you can take it off and put it back on easily, although running the filament through the top outlets will always be a two-step operation. But it provides a clear, unobstructed view of the filament rolls from every angle, so any tangles or spooling irregularities are plain to see (this is a marked improvement over my previous dryer, which is only partially transparent and has a fiddly cover).
As I started using it, I was pleasantly surprised by a few nice touches:
A few close ups of the details. Excuse the dust--this is a working space, not a showroom...
It has five outlets for filament–two at the rear next to the desiccant compartments, and three through the top cover (the middle one is for 3Kg spools). I ended up using one of the top outlets for my enclosed printer (as seen above), but I might move it above my printers later on, and having the option to route filament through the top or back is quite welcome.
It has dual dedicated desiccant compartments, which are easy to access and refill. For now I just tossed in a couple of bags of silica gel, but I will have a go at dropping in granules later on since the grills are fine enough to keep them in place. I don’t see a need to have two kinds of desiccant in there, but I suppose it’s another option.
It is designed to rotate filament rolls automatically as they dry (with a few different speeds), which makes for more uniform drying and is done quietly and slowly enough that it doesn’t cause any tangles.
As to the drying process itself, the display and controls are easy to read and use, with a simple interface that I found intuitive and that has presets for most common filaments (PLA, PETG, ABS, TPU, ASA, etc.), plus a memory function for custom settings and power loss recovery (which feels like a luxury, but is actually a nice touch I was surprised to come across in the manual).
It is rated to go up to 70°C (which is more than enough for most filaments), and the drying time is adjustable from 30 minutes to 24 hours, with a “permanent” mode that keeps the dryer on indefinitely, which I haven’t tried. I did use the target humidity mode for a couple of rolls of PETG and it worked great–and, importantly, it was very quiet.
I have been using it for almost three weeks now and have had no complaints. I tried mostly PETG, because that is what I have the most trouble printing, but both PETG and PLA prints have come out great–I drop in the filament a few hours before, tap out the drying settings, and eventually start printing (typically in late afternoon). I haven’t had any issues with stringing or other artifacts that are common with wet filament, and the prints have been consistent, although I still need to tune the filament settings a bit more to take full advantage of the dryer.
The Polyphemus is a great filament dryer that feels like a luxury item if (like me) you’ve never had something that went beyond heating things based on a timer. The build quality feels solid, the design is well thought out, and the drying itself is effective, although I am not really in a position to do a hyper-scientific comparison with my previous dryer.
But I now have empirical evidence that rotating the filament as it dries makes a substantial difference (at least for the PETG I have been using), and the built-in power supply and a couple of extra features (like the memory function) fit quite well into my setup and workflow, so I am happy to have it around.
It’s been a little while since I last looked at regular mini-PCs, and considering that the market is flooded with cheap Intel N100 and N150 variants, you would think there isn’t a lot of diversity in the EUR 200 range.
I’ve spent the past couple of days watching the tariffs fracas unfolding, and with China’s fresh hike of truly “reciprocal” tariffs to 34%, things are not looking good at all–but Apple is in a pretty serious bind here, since there are newscasters projecting their phones might become over 40% more expensive in the US, with Chinese restrictions likely harming their sales in China as well.
Tim Cook’s going to have to pull off a pretty stressful balancing act on the tariff high-wire as it stretches taut in both directions…
As much as I strive to avoid doomscrolling, the overall feeling here in Europe is not positive. The circus-like political statements, the lack of basic competence in either SIGINT handling or even basic math (the “reciprocal” values look more and more like they were done either on a napkin or using AI without the benefit of natural intelligence), the loss of confidence in a major power that we used to see as an ally, and the immediate impact on the economy are a tad overwhelming.
The chorus of informed commentary that is surfacing around the real consequences yet to come, as depicted in this week’s edition of the Economist (in this and other articles I’ve just started reading), is something I’m actively trying to both ignore and mitigate–although there is very little I can do regarding the latter.
But with the Dow falling 2,000 points and most tech stocks dipping at least 5%, this is already at least as bad as the economic impact of COVID, with the main difference that this is by design. We’re well into the “chaotic evil” quadrant of economic policy.
I’m sick with a sore throat, so I missed the celebrations at the local office.
Having witnessed most of this (40 years of it second-hand as I grew up, and the last 10 directly involved as my hair turns a definite grey), I have mixed feelings–for instance, I don’t have a lot of nostalgia for the early days of DOS and Windows (or the earlier days of CP/M, which are barely hinted at).
It might be because I ran Windows 2.0 on my first Schneider-branded 286 PC and spent a lot of time putting together new machines during college until I was blown away by how much better the Mac user experience was, but it’s most likely because the latter decade has been a roller coaster.
Still, a lot of the anniversary gallery is fun to peruse. Like with all work things it’s kind of amusing to do so from an outside perspective when you know how the AI sausage is made.
But as far as future prospects are concerned, there is a lot of substance to Microsoft’s approach that allows me to ignore all the hype and hope for the best.
Let’s see how things go from here.
Mar 28th 2025 · 1 min read · #apple #development #ios #macos #sequoia #snowleopard #software #ui
Although there are a few inaccuracies here (Snow Leopard was not that much of an improvement, more of a freebie and a way to level-set), it can’t be denied that today’s macOS Sequoia and iOS/iPadOS 18 feel like products patched together by engineers who may not actually use the software they build.
From Messages’ erratic copy/paste behavior and bloated background processes to the counterintuitive UI changes in System Settings (goodbye, easy drag-and-drop display rearrangement), it’s clear that Apple is slipping, and the number of people who replied to me in the Hacker News thread (yeah, I know, HN is kind of a weird place for Apple folk right now) to echo my complaints about Spotlight’s erratic behavior in Sequoia is staggering.
The whole situation shines a… Spotlight (ha!) on frustrations with a platform that seems to become more and more brittle, to the point of actively neglecting the basics it rose to prominence on.
Yes, there will always be a bit of rose-tinted longing for the methodical, almost surgical improvements of the Jobs era. But I honestly have no idea how Apple can keep pushing AI features without cleaning house, and whatever they’re doing in the platform teams just isn’t working.
Mar 26th 2025 · 1 min read · #apple #dasher #lumon #mac #severance #terminal #tv #unix
Having loved Severance Season 2 and its amazing marketing campaign (which includes a Lumon presence on LinkedIn and other strokes of genius), I have to say that I actually wish Apple would sell something like this.
I’ve been playing with Anthropic’s MCP for a little while now, and I have a few gripes. I understand it is undergoing a kind of Cambrian explosion right now, but I am not very impressed with it. Maybe it’s echoes of TRON, but I can’t bring myself to like it.
I have now officially reached the point where I have more KVM devices than I actually need (at least for now), and I think this one will be the last for a while–so this is my attempt to finish the series on a high note.
This was quite quick, really. Migicovsky’s announcement for the two new PebbleOS watches came much earlier than I expected, and the design echoes the original Pebble vibe (with upgrades like a 30-day battery and improved buttons) while introducing modern tweaks such as a touchscreen on the Core Time 2. There’s a fair bit of engineering constraint and hustle evident here, but the overall narrative isn’t shy about the rough edges.
I love these, and reading past the watch specs it’s clear that the project is driven by personal passion more than commercial polish, with the reasoning behind each design choice – from sensor selections to display trade-offs – laid out in a refreshingly candid manner.
I’m very much committed to my Apple Watch (the health features, from ECG to sleep tracking, have been very important to me over the past few years), but the Core Time 2 might be a worthy “downgrade”.
Mar 14th 2025 · 5 min read · #ai #apple #automation #gruber #hype #intelligence #siri
This is going to be a slightly different review, since it is not about a finished product–I’ve been sent an early sample of ArmSom/BananaPi’s upcoming AI module 7 (which will be up on CrowdSupply soon), and thought I’d share my early impressions of it.
Thane Ruthenis takes a long, detailed look at the latest round of AI hype, carefully dissecting every incremental advance and railing against the notion that bigger models automatically mean smarter or more generally capable systems.
A particular point I loved is that, for all the fanfare, many of these advancements merely shuffle around templates (and juggle user perception) rather than deliver genuine leaps in problem-solving. There’s a dry observation about how benchmarks and “vibe checks” often mask a deeper inability to push past known limits–a point well-made, even if it reads like a list of gripes on overblown promises.
I think this is well worth reading if you appreciate a sober technical critique, since it cuts against the hype that faster scaling and more compute will lead directly to transformative AI. Instead, it hints that what we’re really seeing is a series of elaborate rebrandings designed to maintain investor interest rather than deliver practical, autonomous intelligence.
I very much agree with nearly all of it.
Mar 7th 2025 · 1 min read · #ai #diffusion #llm #mercury #models #paradigm #text
Mercury’s claim of being 10x faster than traditional LLMs is intriguing, especially since it reportedly runs at over 1000 tokens per second on NVIDIA H100s, and it looks like the kind of paradigm shift I pointed out was needed to make inference more efficient. Couple this with dedicated hardware and I think we might have a winner here.
As someone who’s watched the current crop of AI evolve from the ground up, a shift from autoregressive to diffusion models seems promising, albeit not without its own set of challenges.
The parallel processing that diffusion models afford could indeed streamline reasoning and reduce latency, but applying them effectively to text and code is easier said than done.
If Mercury and Inception Labs have managed to crack that nut, it’s worth a deeper look. The potential to cut inference costs while enhancing output quality could make high-end AI more accessible. However, I’d be cautious until we see how these models perform in diverse, real-world scenarios–and I hope someone is working on an Open Source implementation that can be run locally.
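To make the autoregressive-versus-diffusion intuition above a bit more concrete, here is a toy sketch (emphatically not Mercury’s actual method, with random choices standing in for model calls) of why refining every position in parallel for a fixed number of steps can beat generating one token at a time:

```python
# Toy illustration: sequential autoregressive decoding needs one "model call"
# per output token, while a diffusion-style decoder refines every position in
# parallel over a small, fixed number of steps. Random choices stand in for
# model calls; this is purely to show where the latency difference comes from.
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def autoregressive_decode(length: int) -> list:
    out = []
    for _ in range(length):           # one step (model call) per token
        out.append(random.choice(VOCAB))
    return out

def diffusion_style_decode(length: int, steps: int = 4) -> list:
    seq = ["<mask>"] * length          # start from an all-masked sequence
    for _ in range(steps):             # each step refines every position at once
        seq = [random.choice(VOCAB) for _ in seq]
    return seq

print(autoregressive_decode(8))   # 8 sequential "model calls"
print(diffusion_style_decode(8))  # 4 parallel passes, regardless of length
```

The point being that the sequential loop scales with output length, while the parallel one scales with the number of refinement steps.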
Mar 6th 2025 · 3 min read · #ai #aim7 #armsom #deepseek #health #keyboards #music #picotracker #programming #rockchip
After a couple of weeks fielding a few (un)usually stressful projects, I eventually managed to pull my usual stunt of falling ill during a short vacation, during which I mostly sipped tea, watched TV, and played Hades instead of doing anything productive like, for instance, cleaning up my office, which is a complete mess.
Well, now we know that the Ultra chips have their own lifecycle. And it’s pretty impressive to read that the M3 Ultra has 2x faster performance than the M4 Max, which in itself is already a beast.
But the availability of half a terabyte of (eye-wateringly overpriced) unified memory is a game changer for a lot of people, and I can see this being a hit with the AI crowd.
Although, to be honest, I find the new Air a lot more interesting. I still wish they had a 12” model (and don’t really care about the blue color), but an M4 on an entry-level laptop is very interesting (even if Apple still insists on comparing it with the Intel version for some stupid reason).
We’ll see how they all fare in reviews–especially where power consumption is concerned.
Mar 5th 2025 · 14 min read · #10gbe #2.5gbe #firewall #hardware #ikoolcore #intel #marvell #n150 #networking #r2max #review
Reviewing things is an interesting experience–no matter what, I learn something from dealing with the various hardware and software bits I get, and every now and then you come across something you really like despite some challenges along the way.
I have been looking at getting a Ryzen AI Max machine of some kind, since the new 8060S iGPU and its unified memory support make it a very tempting setup for both AI and (let’s face it) gaming, very much like the 7840HS and 780M combo I tested last year.
For me, Framework having given it a bit of their modular treatment is hardly as interesting as the prospect of buying it as a standalone motherboard, and even though the RAM is soldered on I don’t think that will be a problem–although the somewhat far-off availability date and their limited international coverage definitely are.
For those who appreciate a sleek, console-sized gaming rig without the tinkering, Framework’s desktop offers a very compelling option–even if I personally don’t care much for the customizable front panel and other external tweaks.