Five Less Than Kubernetes, But More Fun

Breaking stuff can be relaxing, and that’s what I’ve been doing this Saturday. And for the past month, really. Ever since k3s came out, I’ve been relishing having a minimal Kubernetes distribution whose internals I can tinker with trivially on ARM devices.

I now have three k3s clusters running: a “stable” cluster on Azure where I deploy test services based on publicly available containers, a test cluster where I iterate upon the Azure deployment template I built (yes, I’m reinventing AKS for fun), and my battered old ARM cluster at home, where I’m trying to sort out niggling details like private registry support.

Since that’s largely undocumented at this point, I decided to jot down some notes regarding it, as well as how to set up a trivial ingress using the built-in traefik (which is currently my favorite way to expose container services).

Private Insecure Registry (k3s v0.5.0)

I’ve long run my own instance of registry:2 at home (in fact, I even had my own build of it for older ARM CPUs).

Now that I have sorted out multiarch manifests and push my test images to it from the i7 where I cross-compile, I wanted k3s to talk to it as well. But adding TLS support to a private registry inside a LAN is a royal pain, so I prefer to run it over plain HTTP.

As it happens, k3s primes containerd (via its CRI plugin) with a TOML template that lives in /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl, and to add an insecure registry you need to fill out the plugins.cri.registry.mirrors section while preserving the other important bits (like CNI and Docker Hub):

    [plugins.opt]
        path = "{{ .NodeConfig.Containerd.Opt }}"

    [plugins.cri]
        stream_server_address = "{{ .NodeConfig.AgentConfig.NodeName }}"
        stream_server_port = "10010"

    [plugins.cri.cni]
        bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
        conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"

    [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."docker.io"]
        endpoint = ["https://registry-1.docker.io"]
    [plugins.cri.registry.mirrors."registry.lan:5000"]
        endpoint = ["http://registry.lan:5000"]
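The build box needs the same courtesy on the Docker side before docker push will talk plain HTTP. A minimal sketch, assuming the registry answers at registry.lan:5000; the real file belongs at /etc/docker/daemon.json, but I write a local copy here:

```shell
# Docker-side counterpart of the containerd mirror above: mark the LAN
# registry as insecure so pushes go over plain HTTP. This writes a local
# daemon.json; copy it to /etc/docker/daemon.json and restart dockerd.
cat <<'EOF' > daemon.json
{
  "insecure-registries": ["registry.lan:5000"]
}
EOF
cat daemon.json
```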

To deploy this, I have (as is usual for me) a Makefile target that iterates through all the nodes:

NODE_TOKEN:=$(shell sudo cat /var/lib/rancher/k3s/server/node-token)
NODES=node1 node2 node3 node4

token:
	echo ${NODE_TOKEN}

join-%:
	echo "curl -sfL https://get.k3s.io | K3S_URL=https://master:6443 K3S_TOKEN=${NODE_TOKEN} sh -" | ssh $*

config:
	$(foreach NODE, $(NODES), \
		cat config.toml.tmpl | ssh $(NODE) sudo tee /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl;)

reboot:
	-$(foreach NODE, $(NODES), \
		ssh $(NODE) sudo reboot;)
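In case the Makefile foreach obscures it, pushing the template is just a loop over the nodes. A dry-run sketch (the leading echo is mine; drop it to actually run the copies):

```shell
# Dry-run version of the per-node template push: prints the command for
# each node instead of executing it. Node names match the NODES list above.
for NODE in node1 node2 node3 node4; do
  echo ssh "$NODE" "sudo tee /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl < config.toml.tmpl"
done
```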

Deploying and Exposing a k3s Service

With the above configuration (and a fresh image in registry.lan), setting up a service with images from my private registry is as simple as this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red-deployment
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
      - name: node-red
        image: registry.lan:5000/node-red:slim # my WIP custom build of Node-RED
        imagePullPolicy: Always # grab from the registry every time
        env:
        - name: PUID
          value: "1000"
        - name: PGID
          value: "1000"
        ports:
        - containerPort: 1880
        volumeMounts:
        - mountPath: /data
          name: data-volume
      volumes:
      - name: data-volume
        hostPath:
          path: /srv/state/node-red # /srv is a shared NFS mount in my home setup
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: node-red
spec:
  selector:
    app: node-red
  ports:
  - protocol: TCP
    port: 80
    targetPort: 1880
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master.lan
spec:
  rules:
  - host: master.lan
    http:
      paths:
      - backend:
          serviceName: node-red
          servicePort: 80


It’s still a bit of a mouthful, though. I find Kubernetes to be a bit overkill, and its manifests do little to assuage me in that regard.

More to the point, stitching together these stanzas is both error-prone and a chore, but I’ll grant that the end result is (marginally) better than docker-compose for playing around.

Also, things like kubectl get all --all-namespaces -o wide make it plain the CLI needs better ergonomics, since I can never remember half the options. But I digress.
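A couple of shell aliases (my own, not anything kubectl ships) take some of the sting out of the longer incantations:

```shell
# Personal shorthand for the kubectl invocations I keep forgetting;
# these would live in ~/.bashrc or equivalent.
alias k='kubectl'
alias kga='kubectl get all --all-namespaces -o wide'
```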

Obviously, that Ingress is a bit too simple. As it happens, master.lan is the master node, which is the only cluster node that can talk to the outside directly, and since my home router does not let me define custom DNS entries (no CNAMEs, so I only get an A record per IP), my options are a bit limited.
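Until I find a better router, one workaround is to pin hostnames on the clients themselves; a sketch of an /etc/hosts entry, where 192.168.1.10 is a placeholder for the master node's real LAN address:

```
# /etc/hosts on a LAN client (placeholder IP); any extra hostnames the
# Ingress should answer for can ride along on the same line.
192.168.1.10  master.lan node-red.lan
```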

But traefik can do a lot more than just mapping hostnames, so I intend to make a few more tweaks.