What's next after the K8s Migration?

Wherein I go over my future plans for the Homelab, now that the k8s migration is finally done. So it’s done. The k8s migration is finally complete, and I can now get started with some other projects. Or, well, I can once I’ve updated my control plane Pis to Pi 5s with NVMe SSDs. But what to do then? As it turns out, I’ve got a very full backlog. I’m decidedly not in danger of boredom. ...

April 29, 2025 · 18 min · Michael
The HashiCorp Nomad and Kubernetes logos, connected with an arrow pointing from Nomad to Kubernetes

Nomad to k8s, Part 25: Control Plane Migration

Wherein I migrate my control plane to the Raspberry Pi 4 nodes it is intended to run on. This is part 26 of my k8s migration series. This one did not go remotely as well as I expected. Initially, I wasn’t even sure that this was going to be worth a blog post. But my own impatience and the slowly aging Pi 4s conspired to ensure I’ve got something to write about. ...

April 9, 2025 · 17 min · Michael
The HashiCorp Nomad and Kubernetes logos, connected with an arrow pointing from Nomad to Kubernetes

Nomad to k8s, Part 23: Shutdown of the Baremetal Ceph Cluster

Wherein I migrate the last remaining data off of my baremetal Ceph cluster and shut it down. This is part 24 of my k8s migration series. I set up my baremetal Ceph cluster back in March of 2021, driven by how much I liked the idea of large pools of disk I could use to provide S3 storage, block devices, and a POSIX-compatible filesystem. Since then, it has served me rather well, and I’ve been using it to provide S3 buckets and volumes for my Nomad cluster. Given how happy I was with it, I also wanted to continue using it for my Kubernetes cluster. ...

March 29, 2025 · 21 min · Michael
A screenshot of a Grafana time series plot showing the throughput of my Ceph cluster between 23:30 and 09:00. It tops out at almost 100 MB/s, but averages around 65 MB/s. The high throughput lasts from approximately 00:00 to 08:50.

Ceph: My Story of Copying 1.7 TB from one Cluster to Another

A couple of weeks ago, I migrated my Jellyfin instance to my Kubernetes cluster. This involved copying approximately 1.7 TB of media from the baremetal Ceph cluster to the new Rook Ceph cluster. And I’d like to dig a bit into the metrics and try to read them like the entrails of a slain beast during a full moon at the top of a misty mountain. Just this much: the portents don’t look good for one of my HDDs. ...
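As a rough sanity check (my own back-of-the-envelope arithmetic, not from the post itself): copying 1.7 TB at the roughly 65 MB/s average shown in the Grafana plot above works out to

$$\frac{1.7 \times 10^{12}\ \text{B}}{65 \times 10^{6}\ \text{B/s}} \approx 26{,}000\ \text{s} \approx 7.3\ \text{h},$$

which is in the same ballpark as the roughly nine-hour window visible in the plot; the gap plausibly goes to replication traffic counted in the cluster-wide throughput metric.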

March 4, 2025 · 17 min · Michael
The HashiCorp Nomad and Kubernetes logos, connected with an arrow pointing from Nomad to Kubernetes

Nomad to k8s, Part 18: Migrating Jellyfin

Wherein I migrate my Jellyfin instance to the k8s cluster. This is part 19 of my k8s migration series. I’m running a Jellyfin instance in my Homelab to play movies and TV shows. I don’t have a very fancy setup: no re-encoding or anything like that. I’m just using Direct Play, as I’m only watching things on my desktop computer. Jellyfin doesn’t have any external dependencies at all, so there’s only the Jellyfin Pod itself to be configured. It also doesn’t have a proper configuration file. Instead, it’s configured through the web UI and a couple of command-line options. For that reason, there are no Secrets or ConfigMaps; I’ve just got a PVC with the config and some space for Jellyfin’s cache, plus another CephFS volume for the media collection. ...
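To make that storage setup concrete, here is a minimal sketch of what those two volumes could look like. The claim names, sizes, and storage class names (the Rook example defaults rook-ceph-block and rook-cephfs) are assumptions for illustration, not the actual manifests from the post:

```yaml
# Hypothetical names, sizes, and storage classes; the real manifests may differ.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block # RBD-backed block storage for config and cache
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-media
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: rook-cephfs # CephFS, a shared filesystem for the media collection
  resources:
    requests:
      storage: 2Ti # roughly sized for the ~1.7 TB media collection
```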

February 20, 2025 · 12 min · Michael