The HashiCorp Nomad and Kubernetes logos, connected with an arrow pointing from Nomad to Kubernetes

Nomad to k8s, Part 4: Storage with Ceph Rook

Wherein I talk about the setup of Ceph Rook on my k8s cluster. This is part four of my k8s migration series. The current setup: I’ve been running Ceph as my storage layer for quite a while now. In my current Nomad setup, it provides volumes for my jobs as well as S3 for those apps which support it. In addition, most of my Raspberry Pis are diskless, netbooting off of Ceph’s RBD block devices as their root. At first glance, Ceph might look like you’d need an Ops team of at least three people to run it. But after the initial setup, I’ve found it to be very low maintenance. Adding additional disks or entire additional hosts is very low effort. I went through the following stages, with the exact same cluster, without any outages or cluster recreation: ...

January 11, 2024 · 26 min · Michael

Migrating two Ceph OSDs from one physical host to another

Over the weekend, I migrated one of the Ceph VMs in my Homelab over to a physical host. This time around, instead of buying a completely new machine, I recycled most of my old 2018 era home server. It’s an old AMD A10-9700E, meaning the 35W TDP variant. I have noted some thoughts on reusing this old machine here. Mounted in the rack, the machine looks like this: Server mounted in the rack, without Ceph OSD disks attached. ...

April 23, 2023 · 7 min · Michael

Reduce, Reuse, Recycle: Reusing my old home server

I had a random thought today, triggered, of all things, by a short training on Reduce, Reuse, Recycle at work. This is the principle of first looking for the potential to not produce anything. Then to look for a new use for something old that has already been manufactured. And only then, as a final step, to recycle the thing. I, and probably many other Homelabbers, have quite a bit of older hardware lying around. Hardware that’s still perfectly functional, but which is either too slow, or doesn’t support newer features, etc. For me, that’s only two things, because I was a poor student until relatively recently. 😉. The first one, not discussed here, is a desktop from 2017 which I replaced in 2019. It is an AMD Ryzen 1700x. Still a powerful machine, but quite honestly: a bit more powerful and power-hungry than my new “many small, less powerful machines” Homelab principle calls for. ...

March 29, 2023 · 5 min · Michael

Current Homelab - Ceph Storage

This is the next post in the Current Homelab series, where I give an overview of what my lab currently looks like. This time, I will be talking about my storage layer, which is mostly made up of Ceph. I chose Ceph around spring 2021, when I decided to go from a baremetal+docker-compose setup to a VM based setup with LXD. At the time, my main storage consisted of a pair of WD Red 4TB disks for my main storage requirements, and a 60GB Crucial SATA SSD for my server’s root FS. While going through the LXD docs, I saw that it supported something called “Ceph RBD” for its VM volumes. ...

February 16, 2023 · 15 min · Michael

Ceph MON Migration

In the course of spreading my homelab over a couple more machines, I finally arrived at the Ceph cluster’s MON daemons. These were running on three Ceph VMs on my main x86 server up to now. In this post, I will describe how I moved them to three Raspberry Pis. First, a couple of considerations:

- MON daemons use on average about 1GB of memory in my cluster
- My cluster, and most of my services, went down during the migration, so please be cautious if you plan to do your own migration

The MON daemons are something of a control plane for Ceph clusters. They hold the MON map of daemons and data locations. Every client which uses the Ceph cluster will use them to access a map of available OSDs to work with. ...
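As an aside, the MON quorum and monmap described in that excerpt can be inspected with the standard Ceph CLI. A minimal sketch, assuming a reachable cluster and an admin keyring on the host you run it from:

```shell
# Purely illustrative; requires a running Ceph cluster and client.admin keyring.
ceph mon stat                              # quorum membership and current leader
ceph quorum_status --format json-pretty    # detailed quorum and monmap state
ceph mon dump                              # the monmap: MON names and addresses
```

Watching `ceph mon stat` during a migration like this is a quick way to confirm that quorum survives as daemons are moved one at a time.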

December 26, 2022 · 4 min · Michael