The HashiCorp Nomad and Kubernetes logos, connected with an arrow pointing from Nomad to Kubernetes

Nomad to k8s, Part 23: Shutdown of the Baremetal Ceph Cluster

Wherein I migrate the last remaining data off my baremetal Ceph cluster and shut it down. This is part 24 of my k8s migration series. I set up my baremetal Ceph cluster back in March of 2021, driven by how much I liked the idea of large pools of disk I could use to provide S3 storage, block devices, and a POSIX-compatible filesystem. Since then, it has served me rather well, and I’ve been using it to provide S3 buckets and volumes for my Nomad cluster. Given how happy I was with it, I also wanted to continue using it for my Kubernetes cluster. ...

March 29, 2025 · 21 min · Michael
A screenshot of a Grafana time series plot. It shows the throughput of my Ceph cluster between 23:30 and 09:00. It tops out at almost 100 MB/s, but averages closer to 65 MB/s. The high throughput occurs between approximately 00:00 and 08:50.

Ceph: My Story of Copying 1.7 TB from one Cluster to Another

A couple of weeks ago, I migrated my Jellyfin instance to my Kubernetes cluster. This involved copying approximately 1.7 TB worth of media from the baremetal Ceph cluster to the new Rook Ceph cluster. And I’d like to dig a bit into the metrics and try to read them like the entrails of a slain beast during a full moon at the top of a misty mountain. Just this much: the portents don’t look good for one of my HDDs. ...

March 4, 2025 · 17 min · Michael
The HashiCorp Nomad and Kubernetes logos, connected with an arrow pointing from Nomad to Kubernetes

Nomad to k8s, Part 18: Migrating Jellyfin

Wherein I migrate my Jellyfin instance to the k8s cluster. This is part 19 of my k8s migration series. I’m running a Jellyfin instance in my Homelab to play movies and TV shows. I don’t have a very fancy setup, no re-encoding or anything like that. I’m just using Direct Play, as I’m only watching things on my desktop computer. Jellyfin doesn’t have any external dependencies at all, so there’s only the Jellyfin Pod itself to be configured. It also doesn’t have a proper configuration file. Instead, it’s configured through the web UI and a couple of command line options. For that reason, I won’t have any Secrets or ConfigMaps. Instead, I’ve just got a PVC with the config and some space for Jellyfin’s cache, and another CephFS volume for the media collection. ...

February 20, 2025 · 12 min · Michael
A screenshot of a Grafana visualization titled 'Objects in cluster'. It shows a pretty consistent growth until about October 10th, where the first drop of about 150k objects occurs. Then a far steeper drop follows on October 19th to 21st, straight down from 1.9 million to 1 million. Afterwards, there is regular growth again, but now interspersed with similarly regular drops in the object counts.

Cleaning up my Mastodon Media Cache

I recently wandered onto the Mastodon admin page by chance. What I saw there will shock you. (I’m so sorry about that introduction.) That’s perhaps a bit much in the Media storage area for a single-user instance. I was pretty sure that I had previously configured Mastodon’s media cache retention to 7 days. Checking up on that, I found that I had remembered correctly. ...

November 27, 2024 · 14 min · Michael

NFS problems with new Ubuntu 22.04 kernel

Yesterday’s Homelab host update did not go as intended at all. I hit a kernel bug in the NFS code. To describe the problem, I need to go into a bit of detail on my setup, so please bear with me. I’ve got a fleet of 8 Raspberry Pi CM4s and a single Udoo x86 II forming the backbone of the compute in my Homelab. All of them netboot, with no per-host storage at all. To be able to do host updates, including kernels, the boot files used for netbooting are separated per host, and each host’s files are mounted to that host’s /boot/firmware dir via NFS. It looks something like this: ...

February 17, 2024 · 6 min · Michael