After my review of the CM3588 running OpenMediaVault, I spent a little while trying to get it to run Proxmox with ZFS enabled.
There still isn’t any official version of Proxmox for ARM64, so I stuck to Proxmox-Port, which has been working great for me on all the Rockchip devices I tested it on.
Getting ZFS to work proved a bit challenging, though.
I initially tried using the official CM3588 Debian 12 image, but I simply could not get zfs-dkms
to work: the module errored out when loading. So I decided to cheat and reinstalled the OpenMediaVault image, removed all its packages, reinstalled Proxmox-Port on top, and… nothing. Same error messages.
I eventually realized this was because, even though the kernel versions (running, source, module, etc.) were all the same, the module magic (vermagic) and build options of the running kernel weren’t. That was odd, since I had gotten ZFS to work with OpenMediaVault before…
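A quick way to spot this kind of mismatch is to compare the module’s vermagic string against the running kernel (standard modinfo/dkms tooling, nothing board-specific):
❯ uname -r                      # running kernel
❯ modinfo zfs | grep vermagic   # kernel the DKMS-built module was compiled against
❯ dkms status                   # which kernels zfs has actually been built for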
So I went back through my notes and realized that I had to install both the headers and kernel image that ship with the FriendlyELEC image:
❯ ls -al /opt/archives
total 38728
drwxr-xr-x 2 root root 4096 Aug 16 09:36 .
drwxr-xr-x 3 root root 4096 Aug 16 09:36 ..
-rw-r--r-- 1 root root 9001344 Aug 16 09:36 linux-headers-6.1.57_6.1.57-21_arm64.deb
-rw-r--r-- 1 root root 30645296 Aug 16 09:36 linux-image-6.1.57_6.1.57-21_arm64.deb
I think these are also present on the vanilla Debian 12 image, but since I had already reinstalled the machine twice I wasn’t in the mood to do it again, and this fixed everything:
❯ dpkg -i /opt/archives/*
❯ apt install --reinstall zfs-dkms zfsutils-linux
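Before rebooting, a minimal sanity check (again, just stock DKMS and module tooling) confirms that zfs built and loads against the new headers:
❯ dkms status                        # zfs should show up as installed for the running kernel
❯ modprobe zfs && lsmod | grep zfs   # load the module and confirm it is resident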
After a reboot, I was able to set up the pool via the Proxmox web UI (although I still had to run zpool set autotrim=on local-zfs
manually to enable TRIM):
❯ zpool status
  pool: local-zfs
 state: ONLINE
config:

    NAME                                       STATE     READ WRITE CKSUM
    local-zfs                                  ONLINE       0     0     0
      raidz1-0                                 ONLINE       0     0     0
        nvme-WD_Blue_SN580_1TB_242734802371    ONLINE       0     0     0
        nvme-WD_Blue_SN580_1TB_242734800584_1  ONLINE       0     0     0
        nvme-WD_Blue_SN580_1TB_242734801916_1  ONLINE       0     0     0
        nvme-WD_Blue_SN580_1TB_242734801947    ONLINE       0     0     0

errors: No known data errors
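You can double-check that the autotrim setting took (and get a quick look at capacity) with plain zpool commands:
❯ zpool get autotrim local-zfs   # should now report autotrim=on
❯ zpool list local-zfs           # capacity, fragmentation and health at a glance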
This has been working OK for a few days, and I’ve since migrated most of my ARM64 containers onto the CM3588: the eMMC is reserved for Proxmox, and everything else is running off ZFS.
Setting up CIFS/SMB
But since my containers currently take up less than 300GB, I also wanted to keep using the machine as a NAS and share the spare volume capacity.
Since I wanted to have minimal overhead, I created an Ubuntu 24.04 LXC container, installed samba,
and bind-mounted a dataset from the ZFS pool into it directly:
❯ zfs create local-zfs/folders
❯ pct set 203 -mp0 /local-zfs/folders/shares,mp=/mnt/shares
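The shares directory has to exist on the host before the container can use it, and you can verify the bind mount from the host side (same paths and container ID as above):
❯ mkdir -p /local-zfs/folders/shares   # make sure the host-side path exists first
❯ pct exec 203 -- df -h /mnt/shares    # confirm the bind mount is visible inside the container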
For testing, I whipped up a quick Mac-friendly share inside the container:
❯ cat /etc/samba/smb.conf
[global]
workgroup = WORKGROUP
server string = %h LXC (Ubuntu ARM64)
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
panic action = /usr/share/samba/panic-action %d
server role = standalone server
obey pam restrictions = yes
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
usershare allow guests = yes
; Mac settings
mdns name = mdns
vfs objects = catia fruit streams_xattr
fruit:aapl = yes
fruit:model = Xserve
fruit:posix_rename = yes
fruit:veto_appledouble = no
fruit:wipe_intentionally_left_blank_rfork = yes
fruit:delete_empty_adfiles = yes
fruit:encoding = native
fruit:metadata = stream
fruit:zero_file_id = yes
fruit:nfs_aces = no
[scratch]
public = yes
writeable = yes
path = /mnt/shares/scratch
guest ok = yes
force user = nobody
force group = nogroup
create mask = 0777
directory mask = 0777
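Bringing the share up is just a matter of creating the directory (owned by the guest user), restarting Samba, and doing an anonymous listing to confirm it shows up; something along these lines:
❯ mkdir -p /mnt/shares/scratch && chown nobody:nogroup /mnt/shares/scratch
❯ systemctl restart smbd
❯ smbclient -L localhost -N   # the [scratch] share should be listed without credentials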
I also tried using the 45Drives Cockpit plugin, but I had some trouble getting it to work on Debian, so I parked that idea for the moment.
Either way, using LXC and a sub-volume seems like a pattern to follow when I replicate this on an Intel machine. I see a lot of people doing PCI pass-through of their SATA controllers into a TrueNAS VM, but I honestly don’t see the point of the added complexity for a small homelab, and I can set CPU/RAM/storage quotas on this setup just as easily as (if not more easily than) I could with a VM.
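As an illustration (the container ID and limits below are just placeholders), capping a container’s resources from the Proxmox host is a one-liner, and the shared dataset can be quota-limited the same way:
❯ pct set 203 --cores 2 --memory 2048    # cap CPU and RAM for the container
❯ zfs set quota=500G local-zfs/folders   # cap how much of the pool the shared dataset can grow to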
Update: I’ve yet to get KVM to work, but I suspect (given the issues I had with OpenMediaVault earlier) that it is unlikely to work with this kernel. Right now I’d rather have ZFS than KVM, so that’s a trade-off I’m willing to make.
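For reference, a couple of generic checks to see whether the kernel exposes KVM at all (nothing specific to the CM3588):
❯ ls -l /dev/kvm        # no device node usually means the kernel was built without KVM support
❯ dmesg | grep -i kvm   # look for hypervisor initialisation messages or errors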