Wherein I describe how I organize Helm charts and other k8s manifests.
I’ve had this post lying around in my draft folder for a long, long time. Mostly because I started writing it before I realized how useful it is to write posts very close to when something happens.
The “something happens” in this case is the answer to the question “How to organize my Helm charts and other k8s manifests?”. I liked Helm well enough when I looked at it. It’s pretty nice to get all the necessary manifests to run an app, instead of having to write all of them myself.
But the question then was: How to store which exact Helm charts I have
installed, and in which version? And how/where to store the values.yaml
files?
And then, what about random manifests, like additional PriorityClasses?
The solution that was pointed out to me on the Fediverse: Helmfile. It’s a piece of software that reads a declarative list of Helm charts to install and deploys them onto a cluster. It does not re-implement Helm, but simply calls a previously installed Helm binary.
All of the configuration for Helmfile is stored in a local YAML file. A good example of what that config looks like is my CloudNativePG setup. By default, Helmfile reads its config from a file named helmfile.yaml in the current working directory. My helmfile.yaml, stripped down to only the CNPG setup, looks like this:
repositories:
  - name: cloud-native-pg
    url: https://cloudnative-pg.github.io/charts

releases:
  - name: cnpg-operator
    chart: cloud-native-pg/cloudnative-pg
    version: v0.21.2
    namespace: cnpg-operator
    values:
      - ./cnpg-operator/hl-values.yaml.gotmpl
And the hl-values.yaml.gotmpl is then just the values.yaml file for the CNPG Helm chart. With one additional wrinkle: Helmfile can do templating on the values.yaml file. Which is pretty cool. Just one example of how I’m using this is my external-secrets addon values.yaml file:
caBundle: |
{{- exec "curl" (list "https://vault.example.com:8200/v1/my-ca/ca/pem") | nindent 2 }}
Then in turn, I’m writing that to a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: my-ca-cert
stringData:
  caCert: |
    {{- .Values.caBundle | nindent 6 }}
And the curl command is executed on the machine where Helmfile runs, which is particularly nice when you’re fetching Secrets via this mechanism, because it allows you to use local credentials that only exist on that single machine.
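Just to sketch that pattern (the path and field here are purely illustrative, they don’t exist in my setup): a values template could pull a token via a locally authenticated vault CLI instead of hardcoding it:

# hypothetical hl-values.yaml.gotmpl snippet, reading a token with the local vault CLI
apiToken: {{ exec "vault" (list "kv" "get" "-field=token" "secret/homelab/my-app") | quote }}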
Once you’ve entered a release into the Helmfile, it can be deployed with a command like this:
helmfile apply --selector name=cnpg-operator
This will automatically update all repositories and then run helm upgrade. Very helpfully, it will also output the diff between the new release and what’s currently deployed on the cluster.
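If you only want to look at that diff without changing anything on the cluster, there’s also a separate diff subcommand (it relies on the helm-diff plugin being installed):

helmfile diff --selector name=cnpg-operator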
Besides working with Helm charts directly, you can also just throw a couple of manifests into a directory and deploy them the same way. I’m doing this for my own priority classes, for example. I just have them in a directory hl-common/:
ls hl-common/
prio-hl-critical.yaml prio-hl-external.yaml
Helmfile will then use Chartify to turn those loose files into an ad-hoc chart and deploy it.
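The release entry for such a directory looks just like one for a local chart. A minimal sketch (I’m leaving out the namespace here, since PriorityClasses are cluster-scoped anyway):

releases:
  - name: hl-common
    chart: ./hl-common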
The release[].values[] list is also a pretty useful feature. It allows setting Helm chart values right in the Helmfile instead of in a separate values.yaml. I don’t use this too much, as I like having all configs neatly in one file. But I like using this approach in one instance, namely for appVersion-like values on Helm charts I wrote myself. Here’s an example from my Audiobookshelf entry:
- name: audiobookshelf
  chart: ./audiobookshelf
  namespace: audiobookshelf
  values:
    - appVersion: "2.23.0"
The fact that I have the appVersion in the Helmfile directly makes it a lot more convenient when I do my regular service update rounds. Unless something deeper changed, I just need to have my Helmfile open during Service Upgrade Friday and either update the chart version or the appVersion right there, without having to switch between all of the values.yaml or Chart.yaml files.
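Inside my own chart, that value then just ends up in the image tag. A rough sketch of the relevant bit of the Deployment template (the image name is only an illustration, not copied from my chart):

# templates/deployment.yaml (sketch, container spec only)
      containers:
        - name: audiobookshelf
          image: "ghcr.io/advplyr/audiobookshelf:{{ .Values.appVersion }}"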
For my standard approach, I’m currently working with two release entries when using a 3rd party chart. Let’s look at my Forgejo deployment as an example:
repositories:
  - name: forgejo
    url: code.forgejo.org/forgejo-helm
    oci: true

releases:
  - name: forgejo
    chart: forgejo/forgejo
    version: 12.5.1
    namespace: forgejo
    values:
      - ./forgejo/hl-values.yaml.gotmpl
  - name: forgejo-addons
    namespace: forgejo
    chart: ./forgejo-addons
In this approach, the forgejo/hl-values.yaml.gotmpl file is the values.yaml file for the Forgejo chart. But in most instances, 3rd party charts don’t contain everything I need. Examples which come up almost every single time are additional ExternalSecret manifests for credentials, or ObjectBucketClaims for S3 buckets in my Ceph cluster. And those YAML files need to go somewhere. That’s what the $chartname-addons chart is for. It’s a normal Helm chart, including a Chart.yaml and a templates/ directory, and it gets its own values.yaml file. It gets deployed into the same Namespace as the primary chart.
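The Chart.yaml of such an addons chart is about as minimal as it gets. A sketch of what it could look like (description and version made up, not copied from my repo):

apiVersion: v2
name: forgejo-addons
description: Extra manifests deployed alongside the Forgejo chart
version: 0.1.0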
I also trialed a different approach with some of my earliest charts. For those, I created a “parent” chart, which contained the Chart.yaml and any additional manifests on top of the 3rd party chart. Said 3rd party chart then got added as a dependency. But I moved away from that approach completely, as I found the separation between 3rd party chart and my own manifests in the $chartname-addons approach more appealing. There was also the fact that I couldn’t just update the version of the 3rd party chart and then deploy - Helm would always error out because the Chart.lock file was outdated.
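For reference, the Chart.yaml of such a parent chart looked roughly like this (a sketch with made-up version numbers, not a verbatim copy of my old setup). The annoyance was that bumping the dependency version here also meant running helm dependency update to regenerate Chart.lock before every deploy:

# Chart.yaml of the old "parent" chart approach (sketch)
apiVersion: v2
name: forgejo
version: 0.1.0
dependencies:
  - name: forgejo
    version: 12.5.1
    repository: oci://code.forgejo.org/forgejo-helm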
Why not GitOps?
So the obvious question might be: Why not employ GitOps, like Argo or Flux? Mostly: Time. 😁 I’m not averse to adding additional complexity to my Homelab just for the fun of it. But a GitOps tool should have its own management cluster, as it wouldn’t make much sense to me to have e.g. ArgoCD running in the same cluster that it’s managing. So I skipped this option when I initially looked at how I wanted to manage it all.
There’s also the additional hassle of “Okay, and then where will I store the repo and execute the automation?”. I have a Forgejo instance and Woodpecker as CI, but both of those are deployed in my main cluster. So they would be controlled by ArgoCD - which they would also be hosting. On the other hand, there’s the challenge of coming up with something reasonably small that can run ArgoCD without being too much of a hassle.
Finally, there’s also my current workflow: I generally work on a thing until it works properly, and then it gets a commit in the Homelab repo. It would feel a bit weird to make a commit for every single thing I change, for no other reason than that I need said commit to trigger a new deployment. I’m used to this approach from work, but there the CI triggers hundreds upon hundreds of jobs and tens of thousands of tests. It is literally impossible to run the software on our developer machines. But here? Making a commit for every change, pushing it just to make a test deploy - it feels like a bit much?
All of the above being said - I’d really like to hear what those of you who do run GitOps tools to manage your cluster get out of it. What advantages does it have for you? And what’s your workflow? Do you perhaps always work with Helm locally, and then let Argo do its thing once everything already works? Ping me on the Fediverse. I’m genuinely curious. And quite frankly, I want to be convinced - one more project for the Homelab pile. 😁
Finale
And that’s it already for this one. I’ve had it sitting in draft state for way too long.
The next post will likely be on the setup of the Tinkerbell lab, as I’m done with that now and have already deployed Tinkerbell - but it’s not working properly yet.