Breaking stuff can be relaxing, and that’s what I’ve been doing this Saturday. And for the past month, really. Ever since k3s came out, I’ve been relishing having a minimal Kubernetes distribution whose internals I can tinker with trivially on ARM devices.

I now have three k3s clusters running: a “stable” cluster on Azure where I deploy test services based on publicly available containers, a test cluster where I iterate upon the Azure deployment template I built (yes, I’m reinventing AKS for fun), and my battered old ARM cluster at home, where I’m trying to sort out niggling details like private registry support.

Since that’s largely undocumented at this point, I decided to jot down some notes regarding it, as well as how to set up a trivial ingress using the built-in traefik (which is currently my favorite way to expose container services).
Private Insecure Registry (k3s v0.5.0)
I’ve long run my own instance of registry:2 at home (in fact, I even had my own build of it for older ARM CPUs). Now that I have sorted out multiarch manifests and push my test images to it from the i7 where I cross-compile, I wanted to have k3s talk to it as well. But it is a royal pain to add TLS support to a private registry inside a LAN, so I prefer to run it via HTTP only.
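For reference, the registry itself is nothing fancy; a plain-HTTP instance of the stock image does the job. A minimal sketch (the host port matches the examples below, but the storage path is just an assumption):

# no TLS: the registry only listens inside the LAN
docker run -d --name registry --restart=always \
  -p 5000:5000 \
  -v /srv/registry:/var/lib/registry \
  registry:2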
As it happens, k3s primes its bundled containerd via a TOML template that lives in /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl, and to add an insecure registry to it you need to fill out the plugins.cri.registry.mirrors section while preserving the other important bits (like CNI and Docker Hub):
[plugins.opt]
  path = "{{ .NodeConfig.Containerd.Opt }}"

[plugins.cri]
  stream_server_address = "{{ .NodeConfig.AgentConfig.NodeName }}"
  stream_server_port = "10010"

  [plugins.cri.cni]
    bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
    conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"

  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins.cri.registry.mirrors."registry.lan:5000"]
      endpoint = ["http://registry.lan:5000"]
To deploy this, I have (as is usual for me) a Makefile target that iterates through all the nodes:
# grab the join token from the server, so new nodes can register against it
NODE_TOKEN:=$(shell sudo cat /var/lib/rancher/k3s/server/node-token)
NODES=node1 node2 node3 node4

debug:
	echo ${NODE_TOKEN}

# add-<hostname>: install the k3s agent on a new node and join it to the master
add-%:
	echo "curl -sfL https://get.k3s.io | K3S_URL=https://master:6443 K3S_TOKEN=${NODE_TOKEN} sh -" | ssh $*

# push the containerd template to every node (k3s renders it on startup)
deploy-toml:
	$(foreach NODE, $(NODES), \
		cat config.toml.tmpl | ssh $(NODE) sudo tee /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl;)

reboot-nodes:
	-$(foreach NODE, $(NODES), \
		ssh $(NODE) sudo reboot;)
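Rolling out a tweaked template then boils down to:

make deploy-toml    # copy the template to node1..node4
make reboot-nodes   # bounce them so k3s regenerates config.toml

...and something like make add-node5 (any hostname matching the add-% pattern) takes care of joining a brand new node.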
Deploying and Exposing a k3s Service
With the above configuration (and a fresh image in registry.lan), setting up a service with images from my private registry is as simple as this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-red-deployment
  labels:
    app: node-red
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-red
  template:
    metadata:
      labels:
        app: node-red
    spec:
      containers:
        - name: node-red
          image: registry.lan:5000/node-red:slim # my WIP custom build of Node-RED
          imagePullPolicy: Always # grab from the registry every time
          env:
            - name: PUID
              value: "1000"
            - name: PGID
              value: "1000"
          ports:
            - containerPort: 1880
          volumeMounts:
            - mountPath: /data
              name: data-volume
      volumes:
        - name: data-volume
          hostPath:
            path: /srv/state/node-red # /srv is a shared NFS mount in my home setup
            type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: node-red
spec:
  selector:
    app: node-red
  ports:
    - protocol: TCP
      port: 80
      targetPort: 1880
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master.lan
spec:
  rules:
    - host: master.lan
      http:
        paths:
          - backend:
              serviceName: node-red
              servicePort: 80
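Assuming all of the above is saved into a single file (node-red.yaml is just my name for it), deploying and checking it amounts to:

kubectl apply -f node-red.yaml
# confirm the pod pulled from the private registry and traefik picked up the ingress
kubectl get pods,svc,ingress -o wide
curl -s http://master.lan/ | head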
Caveats
It’s still a bit of a mouthful, though. I find Kubernetes to be a bit overkill, and its manifests do little to assuage me in that regard. More to the point, stitching together these stanzas is both error-prone and a chore, but I’ll grant that the end result is (marginally) better than docker-compose for playing around.

Also, things like kubectl --all-namespaces=true get all -o wide make it plain that the CLI needs better ergonomics: I can never remember half the options. But I digress.
Obviously, that Ingress is a bit too simple. As it happens, master.lan is the master node, which is the only cluster node that can talk to the outside directly, and since my home router does not allow me to define custom DNS entries (no CNAMEs, so I only get an A record for each IP), that limits my options a bit.
But traefik can do a lot more than just mapping hostnames, so I intend to make a few more tweaks.
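For instance, fanning several services out behind that single A record by matching on paths instead of hostnames should work along these lines (the /node-red prefix is hypothetical, and the rewrite-target annotation is what Traefik 1.x understands, if memory serves):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: master.lan
  annotations:
    traefik.ingress.kubernetes.io/rewrite-target: / # strip the prefix before proxying to the service
spec:
  rules:
    - host: master.lan
      http:
        paths:
          - path: /node-red
            backend:
              serviceName: node-red
              servicePort: 80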