Running NixOS on ARM LXD/LXC

I decided to build a little sandbox to take with me when traveling, and investigated how to do that on my Raspberry Pi without breaking the entire thing.

Since it already runs LXD with a couple of sandboxes on it (including a full environment), that was the way to go. It’s not a particularly well documented approach (as usual with NixOS or LXD), so I went and figured out how to do this particularly niche kind of install so you don’t have to.

The same steps should apply to x86_64, although I haven’t yet tried this there.

Update: I have since figured out how to do it on x86_64 and written a follow-up post about it.

Preparing the Container Image

First of all, we need a rootfs. So I went and got an aarch64 tarball from Hydra, because, well, I might as well use the bleeding edge builds.

Then you need to create a metadata.yaml file to use with lxc image import. This is what I went with:

architecture: "aarch64"
creation_date: 1721246798
properties:
    architecture: "aarch64"
    description: "NixOS (266360330)"
    os: "nixos"
    release: "24.11pre"

Packing it and importing it into LXD is trivial once you have that figured out:

# pack the metadata
tar -zcvf metadata.tar.gz metadata.yaml

# import the image
lxc image import metadata.tar.gz nixos-system-aarch64-linux.tar.xz --alias nixos

You should now be able to see the image in the local repository:

❯ lxc image list
+-------+--------------+--------+-------------------+--------------+-----------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |    DESCRIPTION    | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+-------------------+--------------+-----------+----------+------------------------------+
| nixos | c359fa7ea1ee | no     | NixOS (266360330) | aarch64      | CONTAINER | 114.92MB | Jul 17, 2024 at 8:07pm (UTC) |
+-------+--------------+--------+-------------------+--------------+-----------+----------+------------------------------+

Running the Container

NixOS requires you to have nesting enabled, since it relies on cgroups for runtime isolation of various kinds; otherwise you won’t even be able to log in at the console.

So you need to set security.nesting to true after creating the container:

❯ lxc launch nixos nix
❯ lxc config set nix security.nesting true

# reboot it for good measure
lxc restart nix
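You can confirm the setting took effect by reading the key back with the standard lxc config get subcommand:

```shell
# should print "true" once the setting has been applied
lxc config get nix security.nesting
```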

You can then have a look at the running configuration:

❯ lxc config show nix
architecture: aarch64
config:
  image.architecture: aarch64
  image.description: NixOS (266360330)
  image.os: nixos
  image.release: 24.11pre
  security.nesting: "true"
  volatile.base_image: c359fa7ea1ee140a2b9957ae1bf57d876f8724f069df04102f4dcfe1156d8f3b
  volatile.cloud-init.instance-id: a0f3333f-940e-4090-8b87-928600d7cf87
  volatile.eth0.host_name: vethc4f4c4ea
  volatile.eth0.hwaddr: 00:16:3e:f9:44:9b
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":10000001},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":10000001}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":165536,"Nsid":0,"Maprange":10000001},{"Isuid":false,"Isgid":true,"Hostid":165536,"Nsid":0,"Maprange":10000001}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 9130125b-b430-4240-a4c9-510067477764
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: ""

…and try to log in by doing lxc console nix.

root is supposed to have a blank password, but I changed it from the host:

# this DOES NOT WORK unless you have security.nesting enabled
lxc exec nix -- /run/current-system/sw/bin/passwd

Installing Packages

First thing I did was to get vim running so I could edit configuration.nix in a sane way (I find nano useless except in dire emergencies):

$ nix-channel --update
$ nix-shell --packages vim

Then I added a starting /etc/nixos/configuration.nix:

{ config, pkgs, ... }:

{
  # this line ensures rebuilds work correctly inside LXC/LXD
  imports = [ <nixpkgs/nixos/modules/virtualisation/lxc-container.nix> ];
  boot.isContainer = true;
  environment.systemPackages = with pkgs; [
    vim
    tmux
    htop
    binutils
    man
    starship
  ];
  services.openssh = {
    enable = true;
    settings = {
      AllowUsers = null; # everyone
      PasswordAuthentication = true; # this is just a sandbox
      PermitRootLogin = "yes";
    };
  };
  programs.zsh.enable = true;
  users.defaultUserShell = pkgs.zsh;
}

Once that’s done, you can proceed as usual:

$ nixos-rebuild switch --upgrade
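If you’d rather check the configuration before committing to it, nixos-rebuild has less invasive subcommands; dry-build just builds the new configuration, and test activates it without making it the boot default:

```shell
# build the configuration without activating anything
nixos-rebuild dry-build

# activate it without adding a boot entry (reverts on restart)
nixos-rebuild test
```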

Here’s the usual gratuitous neofetch output:

[Screenshot: neofetch in the container, via Blink Shell after setting up SSH below; this is so cliché]

Mapping SSH to a host port

A little while later, I decided I wanted to be able to ssh directly into the container.

Since I can access the Pi via Wi-Fi, Bluetooth, or directly via the USB Ethernet gadget, I couldn’t really bridge the container to a single interface (which would not work via Wi-Fi anyway), so I set up a network profile:

# Create profile
lxc profile create proxy-10022

# Add a port forwarding proxy "device"
lxc profile device add proxy-10022 hostport22 proxy connect="tcp:127.0.0.1:22" listen="tcp:0.0.0.0:10022"

# Add the profile to the running container
lxc profile add nix proxy-10022

This is what the profile looks like, for clarity:

❯ lxc profile show proxy-10022

config: {}
description: ""
devices:
  hostport22:
    connect: tcp:127.0.0.1:22
    listen: tcp:0.0.0.0:10022
    type: proxy
name: proxy-10022
used_by:
- /1.0/instances/nix
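With the profile attached, the container’s SSH daemon is reachable on port 10022 of any of the host’s addresses, along these lines (the hostname is just illustrative, use whatever your host answers to):

```shell
# connect to the container's sshd through the proxy device on the host
ssh -p 10022 root@raspberrypi.local
```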

Conclusion

It wasn’t that hard. NixOS can be a trifle mind-bending, but I’ve used weirder UNIX systems back in the day, and the microscopic intersection between it and LXD makes a lot of things pretty easy to do.

I will be using this to experiment with a few configurations, although I don’t have a deep-seated desire to make my homelab configuration more complex just yet; my current setup has made a lot of things trivially easy (which is just the way I like my setups), and I’m more interested in NixOS as a way of managing edge devices than as a full-blown server.

But who knows, it might grow on me.