I wasn’t terribly excited about Apple’s announcements yesterday, but the fact that the Magic Mouse’s charging port was only “fixed” by making it USB-C was definitely a low point until I read this, which I rate as either a perfect piece of satire or a self-inflicted massacre on the one hill you wouldn’t want to die on while defending Apple’s design choices.
I’m just not sure which yet.
Apple seems to think that forcing users to go wireless is a feature, not a bug (which is a bit rich, if you ask me), and I would have stuck to my old AA battery model if it wasn’t based on an old Bluetooth version that now throws up a warning on modern Macs. So I switched to a Logitech M720 and never looked back.
And yeah, it’s kind of funny to read John’s defense and realize he’s been using a Lenovo mouse all along.
Oct 27th 2024 · 3 min read
·
#cooling
#copper
#heatsink
#intel
#n100
#radxa
#sbc
#x4
Back when I reviewed the Radxa X4, I went to some trouble to go over its thermal characteristics and how well that massive heatsink worked.
At the time, I resorted to using my own thermal pad to get the best possible contact between the CPU and the heatsink, but apparently that was not enough, because the board still ran a bit hot under load.
This weekend I decided to do something about that–I took one of the SSD copper heatsinks I had lying around, verified that it was thin enough (it was exactly 1mm), cut it to match the heatsink contact plate, and applied some thermal paste:
To verify that this was not a fluke, I ran the same load test I had used in my original review, and the temperatures peaked at 48°C, staying well below the 96°C I got last time:
Even better, the board had a steadier CPU boost to 3GHz at the start, and held up nicely at 2.5GHz for the duration of the test, without the slight throttling I had observed before:
I’m quite pleased with these results, and I’m going to leave the shim in place to (finally) do some testing with Windows. It’s a simple, effective solution that doesn’t require any permanent modifications to the board, and it seems to work quite well.
I should note that the original testing was done in August and at higher ambient temperatures (even if, according to my office thermometer, by only about 4°C), but even then the difference is quite noticeable. I’m not sure if the thermal pad I used was sub-par (it was what I had on hand that bridged the pretty wide gap between the X4’s CPU and the heatsink), but the shim definitely seems to have done the trick.
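As an aside, the before/after comparison boils down to logging (temperature, clock) samples during the load test and checking two things: the peak temperature and whether the clock ever dipped below the sustained target. A minimal sketch of that bookkeeping (the sampling itself just polls sysfs and is omitted; the names and the 2500MHz threshold are my own, not anything the X4 firmware exposes):

```python
# Toy helper for summarizing load-test samples of (temperature in °C, clock
# in MHz). The 2500MHz "sustained" threshold is an assumption based on the
# board's advertised sustained clock, not a firmware value.

def summarize_run(samples, sustained_mhz=2500):
    """Return the peak temperature and whether the clock ever dipped
    below the sustained target (i.e. the board throttled)."""
    peak_temp = max(t for t, _ in samples)
    throttled = any(mhz < sustained_mhz for _, mhz in samples)
    return peak_temp, throttled

# A run shaped like the one above: boost to 3GHz, then a steady 2.5GHz.
run = [(35, 3000), (42, 2500), (46, 2500), (48, 2500), (47, 2500)]
peak, throttled = summarize_run(run)
print(peak, throttled)  # → 48 False
```

Fed the August samples, the same check would have reported a 96°C peak and a throttling dip instead.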
Also, I kept the heatsink fan, but I’m not sure it’s even necessary anymore. I’ll have to do some more testing to see if it makes a difference in practice (I suspect it will, but I’m not sure how much).
Oct 26th 2024 · 14 min read
·
#arm
#debian
#friendlyelec
#hardware
#nas
#nvme
#openmediavault
#review
#rk3588
#rockchip
#zfs
This one took me a long time, and it was actually the third or fourth RK3588 board I got to review–but the timing of its arrival was completely off (it was in the middle of a work crisis) and I had to put it on the back burner for a while.
Then a bunch of other things happened, and a few months later I eventually got four WD Blue SN580 1TB NVMe SSDs at a good price and I could finally set it up.
Disclaimer: FriendlyELEC supplied me with a CM3588 compute module and NAS board free of charge, and I bought the SSDs and case with my own money. As usual, this post follows my review policy.
In the meantime a lot of people have looked at this board (it’s become a bit of an Internet sensation), so as usual I’ll try to focus on the things that I found interesting or that I think haven’t been covered in detail elsewhere.
A key aspect of the NAS kit is that it is a two-part system: the CM3588 compute module and the NAS board. There are now a few different variants of the compute module, but the NAS board is a single-purpose carrier board that provides all the necessary I/O.
Let’s get the specs out of the way first, starting with the module:
RK3588 SoC (4xA76/2.4GHz cores, 4xA55/1.8GHz cores, plus a Mali-G610 MP4 GPU and the usual 6-TOPS NPU)
16GB LPDDR4X rated at 2133MHz (which is great, as it’s a lot more than the usual 4GB or 8GB I got on most SBCs)
64GB eMMC (mine came with OpenMediaVault pre-installed, but I’ll get to that in a bit)
The module itself is 55x65mm and has a custom form factor that is only compatible with the NAS board–it is not interchangeable with, say, a Raspberry Pi compute module. The various SKU options for the kit are all variations on RAM and eMMC sizes (there is also a “Plus” variant that adds even more RAM and storage, as well as faster 2400MHz RAM).
All the I/O is on the carrier, which is quite a bit larger than usual and compares more to a nano-ITX board than a typical SBC. The board itself is 153x116mm and looks like this:
Out of all the above, the most interesting bits are:
4xPCIe 3.0x1 NVMe SSD slots (which are the star of the show)
A single 2.5 GbE port
Three HDMI ports (two outputs and one input, just like the Orange Pi 5+)
Three USB-A ports (two USB 3.0 and one USB 2.0)
One USB-C port (USB 3.0/DP 1.4)
The usual 40-pin GPIO header
Two MIPI CSI/DSI connectors on the underside
Barrel Jack (for a 12V 3A PSU, which was included in my box)
And a plethora of other connectors and buttons (the usual MASKROM and reset buttons, a power button, and a user button, but also alternate power, RTC clock and fan connectors for a change)
One of the reasons I wanted to test it is that this is possibly the only RK3588 configuration I’ve seen that fully uses all the PCIe lanes the SoC provides–and it may even overshoot that a little, since the Realtek NIC is also on the PCI bus (as a 2.0 device), so internal bandwidth might actually be over-subscribed. It’s hard to tell from the schematics, as the module exposes dedicated PCIe lanes for the NVMe slots and an MDI interface for the LAN port.
Which, I think, is also why you don’t get Wi-Fi or a second LAN port. Digging into the schematics a bit shows only one PCIe 3.0 line to the outside world:
…but four PCIe 3.0 lanes to the NVMe slots:
And given the main selling point of the board is the NVMe slots, I was quite interested to see how this setup would perform in practice.
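Before getting to the benchmarks, some back-of-the-envelope math helps frame the over-subscription question. These are raw line rates under the standard PCIe 3.0 assumptions (8 GT/s per lane with 128b/130b encoding), not what you’d actually measure once protocol overhead kicks in:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth, per direction.
GT_PER_LANE = 8e9            # transfers/s on one PCIe 3.0 lane
ENCODING = 128 / 130         # 128b/130b line encoding
lane_MBps = GT_PER_LANE * ENCODING / 8 / 1e6

nvme_total = 4 * lane_MBps   # four x1 NVMe slots
nic = 2.5e9 / 8 / 1e6        # 2.5GbE line rate in MB/s

print(f"per lane: {lane_MBps:.0f} MB/s")   # ≈ 985 MB/s
print(f"4 slots:  {nvme_total:.0f} MB/s")  # ≈ 3938 MB/s
print(f"2.5GbE:   {nic:.1f} MB/s")         # = 312.5 MB/s
```

In other words, even a saturated 2.5GbE link needs less than a third of a single lane, so the four NVMe slots are where the bandwidth budget actually goes.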
As mentioned above, I populated the board with four WD Blue SN580 1TB NVMe SSDs. These are completely overkill (they are PCIe 4.0x4 devices, and as such much faster than the board supports), but I have future designs for them–and I also got an M.2 to SATA adapter so I can re-use the NAS Kit later in a more conventional way.
Since I had shelved this project for a couple of months, I had forgotten that the CM3588 shipped with OpenMediaVault pre-installed on the eMMC–which was a bit of a relief since I was expecting to have to go through the usual dance of using the Rockchip tools to install an OS I could work with.
Digging further, I realized that FriendlyELEC has put together a very clean setup–the SD card images you download from their site will automatically install to the eMMC (if present), and the OS is Debian 12 without any frills and kernel 6.1 (which I have since automatically upgraded to 6.1.57 without any hitches).
Moreover, apt’s sources.list and friends all listed perfectly ordinary Debian sources and the official OMV package repositories, without any proprietary repos (and no mystery meat packages I could see), so instead of wiping it and installing vanilla Armbian and Proxmox on top, I decided to take OMV for a spin (I lost track of it years ago, and I was curious to see how it had evolved).
But the key point here is that as such things go in the SBC world this is pretty excellent software support right off the bat, and I didn’t feel any need to use Armbian even though the board is community supported.
I took a few different passes at this, starting with getting a feel for what was already installed:
There are a few things of note here:
You get comprehensive LVM storage management and SMART monitoring out of the box
System monitoring is also quite good
File services include Samba, NFS, and my old buddy rsync, which is nice
You get UPS support out of the box
You also get comprehensive network service and user account support
But I wanted to do more–I wanted to see how I could handle virtualization, and, most importantly, I also wanted to set up ZFS for my testing.
So I set to work installing things the OMV way, by using the Web UI to add plugins for those–and I was quite surprised to see that almost everything worked out of the box.
I did have a few relevant issues, though:
/etc/apt/sources.list.d/omvdocker.list (which tried to point to the mainstream Docker package repository) was broken, so I could only install Debian’s docker.io package instead of the mainstream one
The LXC bridge didn’t work out of the box, but that was because the lxc-net service wasn’t running (which in turn was because I had to create /run/resolvconf manually)
The ZFS plugin didn’t work out of the box, but that was because the kernel sources weren’t installed (which I’ll get to in a bit)
All of these seem like things the plugins themselves should have sorted out–although, to be fair, other than the ZFS issue, neither OMV’s ARM packagers nor FriendlyELEC probably expected me to use the NAS kit the way I did. Still, the plugins could be a bit more robust.
I tried installing ZFS by just activating the OMV ZFS plugin, but that didn’t really work–I had the UI components loaded, but there were no ZFS kernel modules.
As it turned out, DKMS kernel configuration didn’t work because the kernel sources weren’t actually installed–which was weird, since I would expect that to be sorted out by the plugin.
Fortunately, the fix was easy–I just had to SSH in, install the kernel headers (I could have done that via the GUI as well, but I wanted to check loaded modules), and then re-run the ZFS plugin setup.
I then set autotrim=on and turned on lz4 compression using the GUI, and was good to go:
…and this gave me only 10K IOPS at 667MiB/s, which is quite a bit lower than, say, the 13K I got from the Orange Pi 5+ and a bit disappointing–but which can be explained by the fact that ZFS compression was on, since all the CPU cores fired up to 100%.
I then turned off compression and ran the test again, and got 13K IOPS at 882MiB/s–with the CPUs pretty heavily loaded but peaking at 95% instead of fully saturated.
So there’s a trade-off here–and given that we’re stuck with PCIe 3.0, I actually think either value is OK. After all, the CPU compression overhead might be a decent compromise if you intend to use ZFS to host development containers or serve compressible data.
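Running the numbers on the two fio results (my own sanity check, using the figures above) makes the trade-off clearer:

```python
# Sanity check on the two fio runs: throughput divided by IOPS gives the
# average transfer size per operation, and the lz4 throughput cost.
def kib_per_op(mib_s, iops):
    """Average transfer size implied by a (MiB/s, IOPS) pair."""
    return mib_s * 1024 / iops

gain = 882 / 667 - 1
print(f"throughput gain: {gain:.0%}")                    # → 32%
print(f"lz4 on:  {kib_per_op(667, 10_000):.1f} KiB/op")  # → 68.3
print(f"lz4 off: {kib_per_op(882, 13_000):.1f} KiB/op")  # → 69.5
```

Both runs work out to roughly the same ~68-69 KiB average transfer size, so the comparison is apples to apples–the ~32% delta really is what lz4 costs at these speeds.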
Hitting the server from the network proved that it had zero issues serving data at line speed while peaking at 45-50% CPU–I got the equivalent of 2.3Gbps on a single SMB/CIFS connection, and I don’t think it would have any issues saturating a 5Gbps link.
I think this is perfectly OK for a fast home NAS, and the only real bottleneck seems to be the built-in Realtek 2.5GbE NIC.
I have a 5GbE USB adapter (and it worked), but I didn’t run a proper end-to-end test–that will be coming in a later post after I switch to Proxmox and (with luck) find the time to re-wire one run of my home network.
And why would I switch to Proxmox, you might ask? Well, I wanted to see how well OMV could handle virtualization–and the answer is “not very well”.
This is not FriendlyELEC’s fault (especially since they don’t control the plugins that OMV uses), but it’s worth mentioning that it’s a clunky experience.
The Docker plugin worked out of the box–I was expecting to have to install the docker.io package and then manually configure the plugin, but it was already there and working. The Compose one, though, took a little bit of fiddling to get right, but to be honest I don’t like its reliance on setting up shared folders for the compose files–even if you do get a workable GUI to manage them:
However, I didn’t spend any time fiddling with Compose, as I typically rely on Portainer–I just wanted to see how viable it was out-of-the-box, and the answer is that it’s perfectly OK.
LXC support was a bit more interesting–it’s joined at the hip to KVM via virt-tools, and besides the fact that KVM didn’t work out of the box (it assumes x86_64 hosts, which is wrong), it was a bit of a pain to set up.
Besides having to reset and reconfigure the LXC bridge (which even after being set up dies every now and then), accessing LXC containers via the console is a bit of a pain–you have to SSH in and then use virsh to access the console:
sudo virsh -c lxc:/// console omv-test
This, together with the fact that I prefer Proxmox and rely on it already to run, backup and migrate LXC containers across all my other ARM devices, is just another reason why I’ll be wiping the OMV install and replacing it with Proxmox–but, again, OMV is fine if you don’t need virtualization, and I was actually pleasantly surprised by how well it worked as a NAS.
The massive heatsink on the CM3588 module is quite effective–I never saw the CPU go above 85°C even under full load, and the clock stayed pegged at 1.8GHz for most of the test, only dropping to 1.4GHz after 2 minutes of full load–and bouncing back up again when the temperature dropped to 75°C.
The CPU performance, however, was a bit under what I’ve seen from other RK3588 devices–I got 13.889 tokens/s, which is a bit lower than the 15.37 I got from the Banana Pi M7 (with the same model version), but I think that might be due to some variation in RAM speeds (and maybe just a bit better cooling on the M7, since it dissipates heat through the case).
The NAS kit is a bit power-hungry, though–with the SSDs installed, I measured 5W at idle and 14-19W from the wall (using one of my Zigbee sockets) under load, which, despite being low in the grand scheme of things, is enough to give me pause.
By the way, the board has a voltage sensor, which is a nice touch:
A couple of weeks ago, after getting the SSDs and starting testing, I finally ordered the case–which is a very sturdy piece of kit, and includes ventilation holes, an RTC clock battery and a fan.
Most of my testing was done without the case, but I did install it for the final ollama run and temperature checks, and am pretty happy with it–it’s compact, sturdy, well made (the feet contain screws that hold both halves together in an ingenious way) and has a lot of ventilation.
I should mention that the fan supplied with the case was… very disappointing. The 40x40x7mm 5V fan was very noisy and had almost unbearable coil whine both while ramping up and at peak speeds, and wins my personal award for the worst fan I’ve ever used.
So all the tests were done without the fan, and I honestly don’t think it’s necessary for most use cases–the heatsink seems more than enough to keep the system cool.
However, I have ordered a couple of replacements since my server closet gets rather toasty in Summer, and the current temperature isn’t really representative of what I’ll see next year. I’ll update this post with the results once I get them.
Hardware-wise, I am quite happy with the CM3588 NAS kit–it’s a well thought-out solution that works very well out of the box if you want a small fire-and-forget NAS.
NVMe storage is still more expensive than SATA (spinning rust or otherwise), but there is clearly a trend towards NVMe in the small NAS world, and even at PCIe 3.0 speeds it’s a good fit for a home or small office server. Most people will be perfectly happy with the Docker support to run a few containers, but I need a bit more control and flexibility–and that’s where Proxmox comes in.
My current plan is to get Proxmox running on it and set up the ZFS pool to use both for consolidating all my ARM LXC containers and provide at least 2TB of fast shared storage for machine learning models and maybe even video editing (which I can’t do off my Synology). That will, however, require a bit more storage, which is a bit out of my budget at the moment.
Consolidating all (or at least most) of my ARM devices into a single server will compensate for the slightly higher idle power consumption, and the combination of ZFS RAID and Proxmox (with both LXC snapshots and migration) will make it easy to preserve my home automation setup and other services.
I’ll put up a follow-up post once I’ve done that, and I’ll also be testing the 5GbE USB adapter and the M.2 to SATA adapter I got for the SSDs–so stay tuned.
Oct 23rd 2024 · 1 min read
·
#apple
#ios
#ipad
#mini
#review
#technology
The reviews for the 2024 iPad mini with A17 Pro are coming out, and Federico’s is, as usual, very positive and optimistic, but IMHO both he and Jason Snell stray a bit too far into the realm of fantasy and try to draw a dotted line to a future foldable iPhone, which I think is a bit of a stretch.
A foldable (any foldable) would not replace the iPad mini either functionally or price-wise (when it comes, it’s going to be the most expensive iPhone ever sold), and would be in stark contrast to Apple’s current approach of building the iPad mini out of binned parts–the current iPad mini is, as someone else put it (besides David Pierce, who also generally agrees with this sentiment), “a product that only a supply chain would love”, and very much a staple of the Tim Cook era.
And that is just not the Apple I grew up with.
I do agree with Federico that the iPad mini has always been my “third place” away from the phone and laptop–but for me, it’s actually the first thing I pick up every day and a device I always travel with instead of my iPad Pro, and as such I think this upgrade does it a major disservice by having a gimped, binned surplus CPU and lacking a few key Pro-like features (like Stage Manager on an external display).
I would even go so far as to claim that the iPad Pro (not the mini) is the true outlier in Apple’s product range–Apple has crippled iPads so much that having Pro features on the mini and using a laptop instead of the Pro actually makes much more sense–I could do without my Pro, but I can’t do without my mini, and that is exactly why this particular upgrade cycle sucks so much.
Oct 20th 2024 · 1 min read
·
#coding
#emulation
#entertainment
#golang
#networking
#notes
#sbc
Other than my link blog, I usually start writing about stuff weeks (if not months) in advance, but due to all the recent changes my stack of drafts hasn’t been tended to properly.
I have a lot on my mind (also about industry and tech), but haven’t yet beefed up my reasoning about a few things or whittled down the salient points to be sharp enough for posting–so Rands’ recent post was a good reminder that I should get back to it.
I revisited my options from two years ago and alighted on the same answer. Late night poking at code is still a good way to relax, and Go almost completely eliminates a lot of hassle of C++, so it’s been a good way to deal with some low-level stuff.
As a bonus, LLMs are pretty decent at generating Go code (the usual joke about Go being designed for average programmers notwithstanding), so a lot of the drudgery can be smoothed out that way–like converting Python code in bulk (with a lot of revisions, but it works).
I have three single-board computers on my desk right now, all of them very different, to the point where testing them will take longer than usual.
So I might actually post two sets of notes on each just to get something done.
I also have a couple of nice 5Gbps network adapters from wisdPI. These play quite nicely with the Sodola switches, but testing them beyond 2.5Gbps has been a bit of a challenge due to various hardware changes–they work fine when plugged into a 10Gbps SFP+ port, but I can currently only test one at a time.
My emulation quest continued for a little while, but rather than playing games, one of the unexpected ways I’ve managed to relax and take my mind off work has been revisiting The Expanse and Frasier. The former is still unbeatable, and the latter still feels remarkably fresh–plus watching it with my two boys is just icing on the cake.
Dario Amodei’s essay is a fascinating dive into the potential upsides of powerful AI, but it feels a bit like a tech utopia wrapped in a cautionary tale. While he rightly emphasizes the need to address risks, the optimism about AI’s ability to solve complex global issues—like health and governance—might be a touch overzealous. Sure, AI could revolutionize biology and mental health, but let’s not forget the human element (or lack thereof) in these grand plans.
The idea that we could compress a century of progress into a mere decade is enticing, but it raises eyebrows. After all, history has shown us that technological advancements often come with their own set of challenges—think of the unintended consequences of social media. So, while the vision of a “compressed 21st century” is compelling, it’s essential to remain grounded in the reality that technology alone won’t save us; it’s how we choose to wield it that will truly matter.
And we just don’t have a good enough track record of that, honestly. Furthermore, LLMs still feel like a party trick gone viral–as token shuffling and prediction machinery, they lack true understanding, the ability to truly learn in real-time (associative memory, for starters), and, more importantly, any kind of agency. Which is why I am convinced that a second AI Winter is increasingly likely as we continue throwing money (and brute force compute) at these problems without understanding them.
Oct 15th 2024 · 1 min read
·
#apple
#intelligence
#ipad
#performance
#technology
I added the quotes because no matter how they sugarcoat the A17 Pro, it’s not the upgrade I wanted for my mini 5 in CPU, display, camera or anything else short of the USB-C port and TouchID (yes, I prefer TouchID).
Given the PR-only prerelease and outrageously spaced out refreshes it’s obvious the mini isn’t a priority for Apple, so I have to figure out if I want to address the fact that the 256GB cellular model is closer to €1000 than I would like or wait another two years to upgrade.
Update: Also, there is no SIM slot–the cellular model is eSIM only. That’s a dealbreaker for me right now.
I’m definitely going to wait for the reviews on this one.
Oct 15th 2024 · 7 min read
·
#balance
#career
#life
#memoir
#microsoft
#personal
#reflections
#work
Let’s not beat about the bush: I have no compunction about classifying this as my worst year at the company (and believe me, there were a few doozies before that).
I don’t think I’ve ever used so many different keyboards in a year. But I am happy that things have been improving so fast that I can actually feel the difference between them.
My initial thought when reading this was “this is just stupid”. Then I thought about the added complexity involved over just mirroring the display, and decided it was probably a great idea UX-wise (along the lines of Continuity), but, in the end, I still think it’s a profoundly stupid implementation, for the following reasons:
Screen mirroring shouldn’t require creating local application stubs at all.
Even if it does for ease of use, there’s no reason to manage them like this. This is just lazy engineering–they should be in a database (where encrypting the contents and metadata with the user’s keychain would be entirely doable).
I shudder to think what will happen if this isn’t on-demand and if you try to sync hundreds of app stubs.
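To make the second point concrete, here is a toy sketch of what “in a database” could look like–one SQLite file holding stub metadata as rows, with the blobs sealed by a keychain-derived key. The schema is entirely mine, and seal() is just a placeholder, not anything Apple actually does:

```python
import sqlite3

# Toy sketch: keep app "stubs" as rows in a single SQLite file instead of
# thousands of loose bundles on disk. seal() stands in for real encryption
# with a key derived from the user's keychain.
def seal(data: bytes) -> bytes:
    return data  # placeholder: encrypt with a keychain-derived key here

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE stubs (
    bundle_id TEXT PRIMARY KEY,
    name      TEXT,
    icon      BLOB)""")
db.execute("INSERT INTO stubs VALUES (?, ?, ?)",
           ("com.example.app", "Example", seal(b"\x89PNG...")))
name, = db.execute("SELECT name FROM stubs WHERE bundle_id = ?",
                   ("com.example.app",)).fetchone()
print(name)  # → Example
```

One file, one encryption boundary, and syncing hundreds of stubs becomes a bulk row transfer instead of a filesystem crawl.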
Oct 7th 2024 · 6 min read
·
#arm
#armbian
#benchmarking
#cooling
#hardware
#linux
#ollama
#power
#reviews
#rk3576
#rockchip
I’m down with the first flu of the season, so I thought I’d write up my notes on the Banana Pi M5 Pro and how it’s fared as part of my increasingly eclectic collection of single-board computers in the post-Raspberry Pi age.
Gaming hasn’t exactly been one of my most intensive pastimes of late–it has its highs and lows, and typically tends to be more intensive in the cold seasons. But this year, with the public demise of Switch emulation and my interest in preserving a time capsule of my high-performance Zelda setup, I decided to create a VM snapshot of it and (re)document the process.
The amazing thing for me is that only last week I heard Nilay Patel say on the VergeCast that virtual nametags would be the killer app for smart glasses.
Well, two Harvard students took that notion and ran away with it, using Ray-Ban Meta glasses to dox people in real-time using AI and public databases. And, get this–most of the tech and databases already exist. They’re just paired with consumer gadgets that make them easier to abuse.
Even as an exercise in raising awareness, it’s plain that the horse is already out of the barn, so… Welcome to the panopticon, I guess?
Oct 1st 2024 · 1 min read
·
#dotnet
#emulation
#gaming
#nintendo
#ryujinx
#switch
After yuzu and Citra, the last working Nintendo Switch emulator is now officially gone.
Fortunately, I had the forethought of writing a release mirror script that was keeping local copies of the latest 5 releases (which I’m also using to automatically download OrcaSlicer and custom CAD software builds that are squirreled away in weird repositories) and I know that some people had gone the extra length to actively mirror the git repositories of every emulator they used, so… I have faith that Nintendo’s whack-a-mole game isn’t over yet.
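The gist of that mirror script (heavily simplified–the release dicts mimic the shape of the GitHub releases API payload, and the actual downloading and pruning are omitted) is just “sort by date, keep the newest N”:

```python
# Core of a release-mirroring script: given release metadata shaped like the
# GitHub API's /releases payload, keep the newest N releases and return the
# asset URLs worth fetching. Fetching and local pruning are left out.
def latest_assets(releases, keep=5):
    newest = sorted(releases, key=lambda r: r["published_at"], reverse=True)[:keep]
    return [a["browser_download_url"] for r in newest for a in r["assets"]]

releases = [
    {"published_at": "2024-09-01", "assets": [{"browser_download_url": "v1.0.zip"}]},
    {"published_at": "2024-10-01", "assets": [{"browser_download_url": "v1.1.zip"}]},
    {"published_at": "2024-08-01", "assets": [{"browser_download_url": "v0.9.zip"}]},
]
print(latest_assets(releases, keep=2))  # → ['v1.1.zip', 'v1.0.zip']
```

ISO 8601 timestamps sort correctly as plain strings, which keeps the whole thing dependency-free.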
As far as I’m concerned, though, and given that I am quite enjoying slowly trying to finish Breath of the Wild in glorious 4K (my kids have already finished both BOTW and Tears of the Kingdom, but I have lagged way behind), I will be safekeeping all the binaries.
Furthermore, we will move full-on to PC gaming with Steam family sharing (the AceMagic AM18 is quite likely to move back into the living room next weekend).
After all, investing time and money on console games that have such a limited shelf life and are created by a company that insists on destroying its own history (remember the 3DS?) feels like a fool’s errand, and I’m definitely not keen on getting another Nintendo console ever again.
I’m still playing catch-up on a lot of things, and I just noticed Zynthian upgraded to the Raspberry Pi 5. The kit is on the expensive side (even without factoring in the Pi itself), but seems like a solid upgrade–I just wish they had a more compact version with TRS MIDI and smaller audio jacks.
I’m starting to bounce back from the last couple of weeks, but am still not fully there yet. Work remains far too much of a rollercoaster (mostly because I care too much, to be honest), and I’m still trying to find my footing.
As much as I love virtualizing machines in my homelab and putting the hardware as far away from my desk as possible, the truth of the matter is that there always comes a time when you need “physical” access to the host to deal with boot issues, change BIOS configurations and other types of housekeeping–and regular remote access just won’t cut it in those times.
Good luck with that. My next TV might still be an LG (just because I usually like their hardware and price points), but like the current one, it will be blocked at my router from accessing the Internet (HomeKit still works, of course, and I use my Apple TV and NVIDIA SHIELD for media).
But this is something the EU should really look into and regulate. The amount of data TV manufacturers are collecting and brokering seems to have fallen off their radar, which is just plain stupid considering all the privacy and GDPR implications.
And it’s a bold move on LG’s part, considering most folks just want to see their family photos or some calming art while they’re not actively binge-watching. Even if you can turn it off, the default setting is a bit of a slap in the face for anyone who thought they’d bought a premium product free from such annoyances.
This shift towards monetizing every idle moment on your TV is a slippery slope. It’s not just about selling hardware anymore; it’s about squeezing every last cent from customers, and brokering the data to get more revenue. And while LG claims this will boost brand awareness, one has to wonder if viewers will just tune out entirely (or worse, switch to a platform that respects their downtime). As the lines blur between content and advertising, it feels like we’re all just one step closer to a world where even our screen savers are working overtime.
The timing for this is great, as I’m starting to get back to shoving LLMs into single-board computers. The 128K token context length seems to be becoming a standard of sorts (which is also nice), and I might actually try the vision models as well (with the usual caveats about SBC NPUs being quite limited in both TOPS and data formats).
And, of course, the “open” part is… Interesting. I’m becoming somewhat skeptical as to anyone’s ability to guarantee either the lineage of their training data or the feasibility of fine-tuning atop newer models trained on what are essentially summaries of data from older models, but time will tell. For now, it’s nice to see Meta pushing the more pragmatic parts of LLMs (i.e., availability and performance at small scale).
I’ve always loved Dieter Rams’ work and have been considering printing Scott Yu-Jan’s design for a couple of weeks now, but this is so much better–less bulky, more streamlined, more functional, and (a nice bonus for me) provides a nice little tray for me to put my glasses in during the night.
The only question right now is what color to print it in (I have that orange, but I suspect the Home Aesthetics Committee wouldn’t approve).
I’ll also be keeping an eye out for a 12 Mini version–I suspect some enterprising souls will cobble together a Fusion 360 parametric version pretty soon (I’m very tempted to try my hand at one in OpenSCAD, but the lack of an easy way to round off all of those edges makes it impractical).
This is pretty amazing, although I am very sad that my Quest 2 isn’t supported. I do like the way you can preview your project alongside, and find the promise of real-time editing and debugging in a virtual space particularly intriguing. And I love that it is Godot doing it, now, on consumer hardware.
That quick feedback loop is something that has always been missing from VR/XR development, and having seen far too many “digital twin” projects in manufacturing and energy hampered by long development and testing cycles, I can’t help but wonder how usable it actually is in practice.
Haiku R1/beta5 has been out for a while, and I completely lost track of it until today.
Not overly enthusiastic about the cosmetics (dark mode should really have an option to change tab color as well), but as someone who actually ran BeOS in the past I find it… original, for sure.
I remain mostly unfazed by the lack of a working ARM port (I’ve been saying this for years), and I will eventually try it on a junk laptop, but can’t really say I’d consider it as a daily driver these days.
No Apple Intelligence yet, and in the short time I’ve had since installing it I’ve already come up against a couple of bugs and inconsistent behavior in Mail, and pretty much zero visible improvement other than in Safari.
Mind you, iOS 18.0 is much worse, a couple of my critical shortcuts have broken already (including the ones I use to post these link entries, which I’ve had to refactor yet again), and I’m quite annoyed at Apple right now…
But hey, we get native window tiling! It is not anywhere as complete or polished as Moom 4, but at least it’s there, and both basic snapping and resizing of tile boundaries work well enough for me to survive on a laptop.
Regardless, this release draws a very stiff line in the sand as regards Apple Silicon vs. Intel feature support. Oh, and it’s somewhat amusing to see OpenCore Legacy Patcher mentioned as a possible upgrade path for older machines…
Although most single-board computers these days ship in a “full” Raspberry Pi 3/4/5 “B” form-factor or larger, I have been on the lookout for Zero variants for a long time, and the Geniatech XPI-3566-Zero is the latest one I’ve tested.