Wherein I write down things that don’t feel like they should be their own post.

My blogging notes are starting to really fill up with small topics I’d like to write about, but which don’t feel like they warrant their own post. On the other hand, they also don’t feel ephemeral enough to just be a Fediverse post. So I decided to introduce the Sammelsurium, which is the German word for a random collection of things.

Setting up autocomplete for a shell alias

Way back when I started my k8s experiments, I made the reasonable decision to set up k as a bash alias for kubectl. Over the last 16 or so months that must have saved me quite a lot of typing. The alias is as simple as they come:

alias k="kubectl"

There’s also a pretty extensive autocomplete that ships with kubectl. I’ve set it up by first writing it out into a file:

kubectl completion bash > ~/.kube/kubectl-comp

Then I source that file in my bashrc:

source ~/.kube/kubectl-comp

So far, so good. But now there’s a problem: This only works for kubectl, not for my k alias!

To make it work for my alias as well, I had to add these lines to my bashrc:

if [[ $(type -t compopt) = "builtin" ]]; then
    complete -o default -F __start_kubectl k
else
    complete -o default -o nospace -F __start_kubectl k
fi
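To check that the registration actually took effect, `complete -p k` prints the completion spec bash has stored for the alias. Here’s a self-contained sketch of the same pattern with a stand-in completion function, so it runs even on a machine without kubectl installed (`_my_completer` and its word list are made up; with the real completion sourced, the function is `__start_kubectl`):

```shell
# Stand-in for __start_kubectl, so the example works without kubectl
_my_completer() { COMPREPLY=(get describe logs); }

# Same branch as above: compopt is a builtin on bash 4+
if [[ $(type -t compopt) = "builtin" ]]; then
    complete -o default -F _my_completer k
else
    complete -o default -o nospace -F _my_completer k
fi

# Show what bash registered for the alias
complete -p k
```

This needs to run under bash, not a POSIX sh, since `complete` and `compopt` are bash builtins.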

Perhaps similarly useful, I’ve also set up an alias for the Rook Ceph kubectl plugin. This plugin needs to be told the cluster and operator namespaces. As I’ve only got one Rook Ceph cluster in my setup, those values never change, so it doesn’t make any sense to type them again and again. My alias looks like this:

alias kceph="kubectl rook-ceph --operator-namespace rook-ceph -n rook-cluster"

Ceph telemetry

Like so many projects these days, Ceph also has a telemetry function. It is opt-in, and the only bad thing I can say about it is that the project asks you to enable it from time to time. I’ve got it enabled. I’ve always felt that data sharing is a good way to help out a project.

But Ceph goes one step further. They also share some of the data in public dashboards you can find here.

The dashboard shows some general information, like the fact that there are about 3.5k Ceph clusters with telemetry enabled, with a combined capacity of 1.73 EiB. It also shows that a typical cluster has about 16 - 32 TiB of storage and a mere 4 OSDs. I’m wondering whether that’s skewed by e.g. Proxmox clusters?
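A quick back-of-the-envelope calculation supports that suspicion: the mean capacity per cluster is an order of magnitude above the typical 16 - 32 TiB bucket, so a few very large clusters must hold most of the capacity while lots of small clusters dominate the count. Using the rounded dashboard numbers from above:

```shell
# Mean capacity per telemetry-reporting cluster.
# 1 EiB = 2^20 TiB, so 1.73 EiB ~= 1.81 million TiB.
awk 'BEGIN {
    total_tib = 1.73 * 2^20   # total reported capacity, in TiB
    clusters  = 3500          # clusters with telemetry enabled
    printf "mean: ~%.0f TiB per cluster\n", total_tib / clusters
}'
# prints: mean: ~518 TiB per cluster
```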

Showing information from TLS certs on the command line

This one always comes up when I’m updating my Let’s Encrypt certs. I just want to have a quick look at my webservers to make sure they’ve all updated to the new certificate correctly.

The command, using my blog as an example, looks like this:

$ openssl s_client -connect blog.mei-home.net:443 </dev/null 2>/dev/null | openssl x509 -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            05:22:36:ee:6e:19:df:56:0a:ee:66:44:a3:fc:a3:00:8c:d7
        Signature Algorithm: ecdsa-with-SHA384
        Issuer: C=US, O=Let's Encrypt, CN=E5
        Validity
            Not Before: Apr  7 08:53:40 2025 GMT
            Not After : Jul  6 08:53:39 2025 GMT
        Subject: CN=mei-home.net
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (384 bit)
                pub:
                    04:6c:97:b7:bb:b1:26:cf:2f:c9:c8:14:65:a2:46:
                    b6:4c:ab:a4:ea:47:57:29:cd:d4:3b:de:11:43:5d:
                    69:a7:9f:be:50:50:81:41:b6:f6:97:a7:35:3a:13:
                    4b:d1:a1:31:84:d0:e6:62:82:47:1f:97:d7:5d:ef:
                    05:1d:5e:42:0d:f1:19:17:9f:59:d0:89:a3:ca:78:
                    8a:d7:ed:a2:9f:d7:9c:32:15:92:f8:6d:ef:5a:7d:
                    20:07:b8:c3:67:30:31
                ASN1 OID: secp384r1
                NIST CURVE: P-384
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                43:C0:F9:C3:C5:10:E4:F0:A5:68:AC:82:8E:7E:B4:D7:74:90:46:29
            X509v3 Authority Key Identifier:
                9F:2B:5F:CF:3C:21:4F:9D:04:B7:ED:2B:2C:C4:C6:70:8B:D2:D7:0D
            Authority Information Access:
                OCSP - URI:http://e5.o.lencr.org
                CA Issuers - URI:http://e5.i.lencr.org/
            X509v3 Subject Alternative Name:
                DNS:*.mei-home.net, DNS:mei-home.net
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
            X509v3 CRL Distribution Points:
                Full Name:
                  URI:http://e5.c.lencr.org/88.crl

            CT Precertificate SCTs:
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : CC:FB:0F:6A:85:71:09:65:FE:95:9B:53:CE:E9:B2:7C:
                                22:E9:85:5C:0D:97:8D:B6:A9:7E:54:C0:FE:4C:0D:B0
                    Timestamp : Apr  7 09:52:10.214 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:45:02:20:47:88:12:84:60:3F:FB:62:7F:4C:A8:05:
                                23:18:C5:25:66:1F:9A:13:58:8E:AD:94:DB:34:9E:C9:
                                9D:F8:A2:07:02:21:00:83:76:32:B0:F7:34:11:B1:BB:
                                EC:6A:2D:8C:B1:47:E6:93:DC:FE:31:3E:53:AE:67:47:
                                08:B4:A3:38:5A:56:A0
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : DD:DC:CA:34:95:D7:E1:16:05:E7:95:32:FA:C7:9F:F8:
                                3D:1C:50:DF:DB:00:3A:14:12:76:0A:2C:AC:BB:C8:2A
                    Timestamp : Apr  7 09:52:12.253 2025 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:44:02:20:03:29:9E:A8:29:43:3B:A9:44:EE:DB:60:
                                70:E0:4A:9C:DB:DD:0C:9F:20:7D:7F:FB:DA:AF:90:FD:
                                4E:EB:59:31:02:20:5B:84:2C:BC:05:A7:53:A4:EB:04:
                                59:A4:7B:77:0E:5A:90:39:1B:68:BF:48:71:14:E5:16:
                                72:42:89:55:76:95
    Signature Algorithm: ecdsa-with-SHA384
    Signature Value:
        30:65:02:31:00:87:c9:85:13:1f:f7:b1:0a:d0:2d:0c:56:7f:
        bd:1e:f5:51:2b:31:59:62:03:ee:bf:ca:fc:3f:09:b0:e4:e2:
        74:80:aa:16:ac:1b:bf:17:38:3a:3a:22:6a:70:4c:57:e3:02:
        30:1e:73:29:b1:e4:c4:43:a5:d8:bd:8f:81:a6:23:c6:10:b3:
        cc:b0:3f:31:8b:86:f3:51:76:c8:85:b4:37:a2:be:96:e0:83:
        61:65:cb:b8:6a:cd:d8:56:d7:7b:f4:a4:83
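When I only care about the expiry, `-dates` (or just `-enddate`) trims the output down to the validity window instead of the full `-text` dump. Here’s a self-contained sketch against a throwaway self-signed certificate, so it runs without any network access (the CN and file paths are made up):

```shell
# Generate a throwaway self-signed cert to inspect
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
    -days 30 -nodes -subj "/CN=demo.example.test" 2>/dev/null

# Just the subject and the validity window
openssl x509 -in /tmp/demo-cert.pem -noout -subject -dates
```

Against a live server, the same flags work on the tail end of the pipe, e.g. `openssl s_client -connect blog.mei-home.net:443 </dev/null 2>/dev/null | openssl x509 -noout -dates`.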

Excluding containers from pull-through cache in cri-o

In the past, I’ve written about migrating to Harbor during my k8s migration, and about the fact that cri-o supports pull-through caches for any registry.

I’d like to provide a short update on that setup, specifically on the pull-through cache. Because there’s one tiny problem with setting Harbor up as a generic pull-through cache: Harbor itself. What if an important Harbor component gets rescheduled during a node restart, and the Harbor images aren’t available on the new node - but Harbor is already down, so the cache doesn’t work?

Well, first of all, cri-o itself of course keeps working: If the cache doesn’t respond, the original registry is tried. But this fallback seems to depend on how exactly the cache fails. Namely, I ran into issues with my DockerHub mirror, which runs through a Caddy proxy. I described the reason in the blog post I linked above.

Well, luckily the cri-o team thought of that, and you can prevent specific repositories from using the cache altogether. So now my config for DockerHub looks like this:

[[registry]]
prefix = "docker.io"
insecure = false
blocked = false
location = "docker.io"

[[registry.mirror]]
location = "internal.example.com/dockerhub-cache"

[[registry]]
prefix = "docker.io/goharbor"
location = "docker.io/goharbor"

[[registry]]
prefix = "docker.io/caddy"
location = "docker.io/caddy"

This configuration redirects all DockerHub image pulls to my internal Harbor instance by default. But for Harbor’s own images and for Caddy specifically, the redirection is overridden to point straight at DockerHub again. With this config I can be sure that Harbor’s own images can always be pulled, even when Harbor itself is down.

And that’s already it for my first Sammelsurium post. I think this is a good format for providing some short information I’d like to put somewhere more permanent, but don’t want to write a full blog post about.