Kubernetes Cert Renewal and Monitoring

Wherein I let my kubectl certs expire and implement some monitoring. A couple of days ago, I was getting through my list of small maintenance tasks in my Kubernetes cluster. Stuff like checking the resource consumption of new deployments and adapting the resource limits. And in the middle of it, one of my kubectl invocations was greeted by this message: "error: You must be logged in to the server (Unauthorized)". So I had a look at my kubectl credentials. For those who don't know, kubectl authenticates to the cluster with a client TLS cert by default. I had just copied the admin.conf config file kubeadm helpfully creates during cluster setup. I didn't really see any reason to set up anything more elaborate, considering that I'm the only admin in the cluster. ...
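Checking for this yourself is straightforward. Below is a minimal sketch (my own illustration, not tooling from the post) that pulls the base64-embedded client certificate out of a kubeadm-style kubeconfig such as admin.conf and prints when it expires; it assumes PyYAML and a recent cryptography package are installed.

```python
#!/usr/bin/env python3
"""Print the expiry of client certificates embedded in a kubeconfig."""
import base64
import datetime
import sys

import yaml  # PyYAML
from cryptography import x509  # needs cryptography >= 42 for *_utc below

kubeconfig = sys.argv[1] if len(sys.argv) > 1 else "admin.conf"
with open(kubeconfig) as f:
    config = yaml.safe_load(f)

now = datetime.datetime.now(datetime.timezone.utc)
for user in config.get("users", []):
    cert_b64 = user.get("user", {}).get("client-certificate-data")
    if not cert_b64:
        continue  # entry authenticates some other way (token, exec plugin, ...)
    cert = x509.load_pem_x509_certificate(base64.b64decode(cert_b64))
    left = cert.not_valid_after_utc - now
    print(f"{user['name']}: expires {cert.not_valid_after_utc:%Y-%m-%d}, "
          f"{left.days} days left")
```

Run against a copied admin.conf, this would have flagged the expiry well before kubectl started returning Unauthorized.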

December 7, 2025 · 10 min · Michael
[Image: a Grafana dashboard. It shows a number of stat panels at the top, for example the number of users and buckets and the total bytes sent in the interval. Below that, there are a number of time series panels, like the number of operations over time and bytes sent or received by bucket. Each individual panel and its content is described in detail in the main post.]

Gathering Metrics from Ceph RGW S3

Wherein I set up some Prometheus metrics gathering from Ceph's S3 RGW and build a dashboard to show the data. I like metrics. And dashboards. And plots. And one of the things I've been missing up to now is data from Ceph's RadosGateway. That's the Ceph daemon which provides an S3 (and Swift) compatible API for Ceph clusters. While Rook, the tool I'm using to deploy Ceph in my k8s cluster, already wires up Ceph's own exporters to be scraped by a Prometheus Operator, that does not include S3 data. My main interest here is the development of bucket sizes over time, so I can spot misconfigurations early. Up to now, the only indicator I had was the size of the pool backing the RadosGW, which currently stands at 1.42 TB, making it the second-largest pool in my cluster. ...
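To give a flavor of the underlying data: here's a minimal sketch (my own illustration, not the exporter from the post) that asks the RadosGateway for per-bucket statistics via radosgw-admin, e.g. from inside Rook's toolbox pod. The JSON field names ("usage", "rgw.main", "size_actual", "num_objects") are assumptions based on typical radosgw-admin output and may differ between Ceph versions.

```python
#!/usr/bin/env python3
"""Print per-bucket sizes as reported by radosgw-admin."""
import json
import subprocess

# Ask the RGW admin interface for stats on all buckets.
result = subprocess.run(
    ["radosgw-admin", "bucket", "stats", "--format=json"],
    check=True, capture_output=True, text=True,
)

for bucket in json.loads(result.stdout):
    # Assumed layout: regular object data lives under usage/rgw.main.
    main = bucket.get("usage", {}).get("rgw.main", {})
    size_gib = main.get("size_actual", 0) / 2**30
    print(f"{bucket['bucket']:<40} {size_gib:10.2f} GiB  "
          f"{main.get('num_objects', 0):>10} objects")
```

An exporter then boils down to emitting these numbers as Prometheus gauges on a scrape endpoint instead of printing them.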

October 10, 2025 · 15 min · Michael

Updating CloudNativePG Postgres Images

In the interest of paying down a bit of technical debt in the Homelab, I recently started updating the CloudNativePG Postgres images to their new variants. Where the Postgres operand images (see the GitHub repo) were previously based on the official Postgres containers, they're now based on Debian and its Postgres packages. With this switch, instead of just one image per Postgres version, there are now a few variants: ...

October 1, 2025 · 6 min · Michael

Replacing a Broken HDD in my Ceph Cluster

Back in July, I was greeted by an error on my Ceph dashboard while visiting family. [Image: a Ceph error you generally don't want to see while you're 400 km away from your Homelab.] This error meant that during the nightly scrub, Ceph had detected an inconsistency that was not trivially resolvable. ...

September 29, 2025 · 14 min · Michael

Updating my Kubeadm k8s Cluster from 1.30 to 1.33

Wherein I talk about updating my kubeadm Kubernetes cluster from 1.30 to 1.33 using Ansible. I've been a bit lax about my Kubernetes cluster updates, and I was still running Kubernetes v1.30. I'm also currently on a push to finish a number of the smaller tasks in my Homelab, paying down a bit of technical debt before tackling the next big projects. I already did one such update in the past, from my initial Kubernetes 1.29 to 1.30, using an Ansible playbook I wrote to codify the kubeadm upgrade procedure. But I never wrote a proper post about it, which I'm now rectifying. ...
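For context, here's a minimal sketch of the control-plane sequence such a playbook has to codify, not the playbook itself: kubeadm only supports upgrading one minor version at a time, so going from 1.30 to 1.33 means three rounds of plan and apply. The patch versions below are illustrative, and installing the matching kubeadm/kubelet packages before each round is assumed to happen elsewhere.

```python
#!/usr/bin/env python3
"""Walk the first control-plane node through successive kubeadm upgrades."""
import subprocess

# kubeadm can't skip minor versions, so 1.30 -> 1.33 takes three hops.
# Patch levels are illustrative placeholders.
STEPS = ["v1.31.0", "v1.32.0", "v1.33.0"]

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for version in STEPS:
    # Preview what the upgrade would change before committing to it.
    run("kubeadm", "upgrade", "plan", version)
    # Upgrade the control plane non-interactively.
    run("kubeadm", "upgrade", "apply", version, "--yes")
    # Still left to do per round: upgrade the kubelet package and restart
    # it, and run "kubeadm upgrade node" on the remaining nodes.
```

The real procedure also involves draining each node before its kubelet upgrade and uncordoning it afterwards, which is exactly the kind of fiddly sequencing an Ansible playbook is good at pinning down.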

September 21, 2025 · 17 min · Michael