
Tinkerbell Part I: The Plan
A rough overview of my plan for trialing Tinkerbell in my Homelab. This is part 1 of my Tinkerbell series. I’m planning to trial Tinkerbell in my Homelab to improve my bare-metal provisioning setup. This first post covers the plan and the reasons why I’m doing this. Tinkerbell is a system for provisioning bare-metal machines. It is deployed into a Kubernetes cluster and consists of a controller, a DHCP/netboot server, a metadata provider, e.g. for cloud-init data, and an in-memory OS for running workflows. The basic idea is that new machines netboot into that in-memory OS and execute workflows configured in Tinkerbell to install the actual OS. ...
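To make the workflow idea a bit more concrete, here is a rough sketch of what a Tinkerbell Template could look like. I haven’t deployed any of this yet, so treat it as a sketch: the CRD shape and the image2disk action are taken from the upstream examples as I remember them, and the image URL and disk device are placeholders for my setup.

```yaml
apiVersion: tinkerbell.org/v1alpha1
kind: Template
metadata:
  name: ubuntu-install
  namespace: tink-system
spec:
  # The template data is itself a YAML document, embedded as a string.
  data: |
    version: "0.1"
    name: ubuntu-install
    global_timeout: 1800
    tasks:
      - name: "os-installation"
        worker: "{{.device_1}}"
        volumes:
          - /dev:/dev
          - /dev/console:/dev/console
        actions:
          # Stream a raw OS image onto the target disk.
          - name: "stream-image"
            image: quay.io/tinkerbell/actions/image2disk:latest
            timeout: 600
            environment:
              IMG_URL: "http://example.internal/images/ubuntu.raw.gz"  # placeholder
              DEST_DISK: /dev/sda                                       # placeholder
              COMPRESSED: true
```

A Workflow resource then ties a Template like this to a Hardware entry, and that pairing is what the netbooted in-memory OS picks up and executes.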
Gathering SNMP Metrics with the SNMP Exporter
I have been gathering metrics from my DrayTek Vigor 165 modem for a while now, and finally got around to documenting the setup, so now you get to read about it. I’m using the Vigor 165 to connect to the Internet via a Deutsche Telekom 250 Mbit/s VDSL connection. That modem supports SNMP and can provide metrics like the line speed or quality. A couple of years back, I wanted to get that data into my Grafana dashboards. After some searching, I came across the SNMP Exporter. ...
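The core of the setup is pointing Prometheus at the SNMP Exporter and passing the modem’s address as the target parameter. A minimal sketch of the scrape config, following the pattern from the exporter’s README; the module name, addresses and port are placeholders for my network, and newer exporter versions also expect an auth parameter:

```yaml
scrape_configs:
  - job_name: snmp
    metrics_path: /snmp
    params:
      module: [if_mib]         # placeholder module; the Vigor needs its own generator config
    static_configs:
      - targets:
          - 192.168.1.2        # the modem's address (placeholder)
    relabel_configs:
      # The device address becomes the ?target= parameter for the exporter...
      - source_labels: [__address__]
        target_label: __param_target
      # ...and also the instance label on the resulting metrics.
      - source_labels: [__param_target]
        target_label: instance
      # The actual scrape goes to the SNMP Exporter itself.
      - target_label: __address__
        replacement: 127.0.0.1:9116  # where the exporter runs (placeholder)
```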
Migrating from Gitea to Forgejo
Wherein I migrate my Gitea instance to Forgejo. The Git forge Gitea is one of the oldest services in my Homelab. I set up the first instance about ten years ago, when a budgetary problem forced me to switch my Homeserver to a Pi 3. And that wasn’t really able to run Gitlab, my previous hosting platform. So Gitea it was. Then I had another Gitlab phase after those budgetary constraints were decisively lifted. And then I returned to Gitea, because Gitlab was really, really annoying me, back in 2021. I have been quite happy with Gitea. It provides me with a nice UI for my repos and a convenient place for logging issues, although I’ve never really used that feature much. A couple of years ago, I also added CI with Drone, but that’s about all I’ve ever needed from a Git forge. ...
Setting up Thanos for Metrics Storage
At the time of writing, I have 328 GiB of Prometheus data. When it all started, I had about 250 GiB. I could stop gathering more data whenever I like. 😅 So I’ve got a lot of Prometheus data. Especially since I started the Kubernetes cluster - or rather, since I started scraping it - I’ve had to regularly increase the size of the storage volume for Prometheus. This might very well be due to my 5-year retention. But part of it, as will become clear later, was because some of the things I was scraping had a 10s scrape interval configured. ...
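To give a sense of why the scrape interval matters: a 10s interval produces six times as many samples as a 60s one, for every single series of that target, and with a multi-year retention that adds up quickly. Dialing it back is a one-line change per job; a sketch for a plain scrape config, with the job name and target being placeholders:

```yaml
global:
  scrape_interval: 60s             # default for all jobs
scrape_configs:
  - job_name: some-chatty-target   # placeholder
    scrape_interval: 60s           # override whatever aggressive default the integration shipped with
    static_configs:
      - targets: ["10.0.0.5:9100"] # placeholder
```

With the Prometheus Operator, the equivalent knob is the interval field on the ServiceMonitor.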

Migrating my Kubernetes Control Plane to Raspberry Pi 5
I’ve had problems with the stability of my Kubernetes control plane ever since I migrated it to three Raspberry Pi 4s from its temporary home on a beefy x86 server. I will go into more detail about the problem first, describe the Pi 5 with NVMe a bit, and then describe the migration itself. The problem: I’ve noted in a couple of the last posts that I’ve started seeing instability in my Kubernetes control plane. The main symptom was my HashiCorp Vault Pods going down regularly. This was pretty visible because I have not automated unsealing for Vault, so each time the Pods are restarted, I have to manually enter the unseal passphrase. ...