Wherein I talk about the setup of Rook Ceph on my k8s cluster.
This is part five of my k8s migration series.
## The current setup

I’ve been running Ceph as my storage layer for quite a while now. In my current Nomad setup, it provides volumes for my jobs as well as S3 for those apps that support it. In addition, most of my Raspberry Pis are diskless, netbooting off of Ceph’s RBD block devices as their root filesystems.

At first glance, Ceph might look like something you’d need an Ops team of at least three people to run. But after the initial setup, I’ve found it to be very low maintenance. Adding more disks, or even entire hosts, takes very little effort; I’ve included a small example after the list below. I went through the following stages, with the exact same cluster, without any outages or cluster recreation:
...
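To give a feel for how little effort expanding the cluster takes, here is a minimal sketch, assuming a cephadm-managed cluster; the hostname `node4`, its address, and the device path `/dev/sdb` are placeholders for illustration, not my actual setup:

```bash
# Minimal sketch, assuming a cephadm-managed cluster.
# Hostname, address, and device path are placeholders.

# Copy the cluster's SSH key to the new host so cephadm can manage it.
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node4

# Register the new host with the orchestrator.
ceph orch host add node4 10.0.0.14

# Create an OSD on one of its empty disks.
ceph orch daemon add osd node4:/dev/sdb

# Watch the cluster state while data rebalances onto the new OSD.
ceph -s
```

Once the new OSD is up, Ceph rebalances existing data onto it in the background, with the cluster staying online the entire time.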