Wherein I will explain how to use pass and GnuPG to secure k8s credentials.

Since I migrated my HashiCorp Vault instance into my Kubernetes cluster, I started to feel a bit uncomfortable with the Kubernetes access credentials just sitting in the ~/.kube/config file in plain text. Anyone who somehow gets access to my Command & Control host would be able to access them and do whatever they like with the Kubernetes cluster, including the Vault deployment containing a lot of my secrets.

So I asked around on the Fediverse, and Sheogorath@shivering-isles.com came back with two interesting blog posts. The first one, using OIDC, looked promising, but it would require additional infrastructure that would need to be up whenever I wanted to do something in Kubernetes, which would also have meant that I couldn’t run that infrastructure in Kubernetes itself.

But the second post was very interesting, showing how to use pass to store the k8s credentials.

I’m already using pass as my password manager on my desktop and phone, so this sounded like an excellent idea.

In short, pass is a pretty simple bash script which uses GnuPG to encrypt and decrypt files containing passwords, or really any data at all, sitting in my home directory. The initial setup is a little more involved due to needing GnuPG keys, but afterwards it’s pretty easy to use. Its main interface is a command line script for entering new passwords, showing existing ones, and moving passwords around. But there’s also an Android app and a Firefox browser extension, which both work very nicely.
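For illustration, everyday usage looks roughly like this (the entry names here are made up):

# add a new password (prompts for the value)
pass insert websites/example.com
# decrypt and print it again
pass show websites/example.com
# move an entry to a different folder
pass mv websites/example.com private/example.com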

There was only one problem: I didn’t want to set up a whole different set of GnuPG keys to use on my Command & Control host. After some searching, I figured out that gpg-agent has some forwarding options, similar to ssh-agent. And I already had gpg-agent running on my desktop.

Using a remote gpg-agent for access to the secret key also has an additional advantage: Even if an attacker can get into my Command & Control server, the key necessary to decrypt the Kubernetes credentials is not physically present on the machine. One more hurdle for an attacker to overcome.

Setting up GnuPG on the Command & Control machine

The first thing to do is to import the public part of the key that pass will later use to encrypt the Kubernetes credentials. Note that only the public key is needed here - the private key stays on the original machine, in my case my desktop computer.

First, list the keys on the original host:

gpg --list-public-keys

pub   rsa4096 2022-06-23 [SC]
      3BBC8F8D9E7CB515338C6F0B34BBBD3D676F000F
uid        [ultimate] Foo Bar <mail@example.com>
uid        [ultimate] Baz Bar (Private) <mail2@example.com>
sub   rsa4096 2022-06-23 [E]

[...]

In this output, the important part is the key fingerprint on the line after the pub line: 3BBC8F8D9E7CB515338C6F0B34BBBD3D676F000F. That’s the identifier for the key.
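If you ever need that fingerprint in a script, gpg’s machine-readable output makes it easy to extract; a small sketch, assuming a single key matches the given UID:

# print the fingerprint of the first public key matching the UID
gpg --list-public-keys --with-colons mail@example.com \
  | awk -F: '/^fpr/ {print $10; exit}'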

Next, I needed to transfer the public key over to my Command & Control host:

gpg --export 3BBC8F8D9E7CB515338C6F0B34BBBD3D676F000F | ssh myuser@candchost gpg --import

With that done, I could go ahead and set up the GnuPG agent forwarding. I followed this documentation and did not have any issues.

In short, I added these lines to the SSHD server configuration on the candchost:

Match User myuser
  StreamLocalBindUnlink yes
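For the change to take effect, sshd has to reload its configuration; on a systemd-based host that is typically:

# the unit is called "ssh" on Debian/Ubuntu, often "sshd" elsewhere
sudo systemctl reload ssh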

In addition, I also had to add these lines to my own SSH config for my user on my desktop from where I’m accessing the Command & Control host, at ~/.ssh/config:

Host candchost
  RemoteForward  /run/user/1000/gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra

As the documentation notes, the correct socket paths can be determined with gpgconf. To get the second path in the RemoteForward option, which is the local (on my desktop) gpg-agent “extra” socket, run this on the desktop:

gpgconf --list-dir agent-extra-socket

And then to get the first argument of RemoteForward, the socket path on the candchost, run this there:

gpgconf --list-dir agent-socket

This is just the path of the standard GnuPG socket on that host.
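Putting the two together, the whole RemoteForward line can be assembled in one go; a sketch, run from the desktop:

# first argument: standard socket on the candchost, second: local extra socket
echo "RemoteForward $(ssh myuser@candchost gpgconf --list-dir agent-socket) $(gpgconf --list-dir agent-extra-socket)"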

And that’s all there was to it. When I reconnected to the candchost via SSH, I was able to use gpg-agent and got access to my remote agent on my desktop.
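A quick way to verify this is to ask the remote gpg for the secret key; it consults the (now forwarded) agent, so the key should show up even though no private key material is stored on the candchost:

# run on the candchost; empty output means the forwarding isn’t working
gpg --list-secret-keys 3BBC8F8D9E7CB515338C6F0B34BBBD3D676F000F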

One last thing to do was to trust the public key transferred to the candchost. This was only possible after the forwarding had been configured, because I didn’t have - and don’t need - a private key on the candchost to do the trusting with.

Trusting a key works like this:

gpg --edit-key 3BBC8F8D9E7CB515338C6F0B34BBBD3D676F000F
Secret key is available.

[...]

gpg> trust
[...]

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

[...]
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> q

This procedure uses the private key from the gpg-agent, meaning the key from my desktop system, which was a nice confirmation that the forwarding setup worked.
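As an aside, the same ownertrust can also be set non-interactively, which is handy when provisioning such a host with a script; a sketch using the fingerprint from above:

# "6" is the ownertrust level corresponding to "ultimate"
echo "3BBC8F8D9E7CB515338C6F0B34BBBD3D676F000F:6:" | gpg --import-ownertrust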

Setting up pass

The next step is to set up pass. First, install it:

apt install --no-install-recommends --no-install-suggests pass

The --no-install-suggests and --no-install-recommends flags are very much required; without them, you’re going to get pieces of X11 installed on an Ubuntu system.

To initialize pass, the init command is used, with the fingerprint of the public key as its argument:

pass init 3BBC8F8D9E7CB515338C6F0B34BBBD3D676F000F

This creates the password store in the default location at ~/.password-store.
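pass records the chosen key in a plain text file inside the store, so it’s easy to double-check which key new entries will be encrypted to:

cat ~/.password-store/.gpg-id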

Setting up Kubernetes

Following Sheogorath’s blog post, I first extracted the client certificate and key from the kube config file with these commands:

kubectl config view --minify --raw --output 'jsonpath={..user.client-certificate-data}' | base64 -d | sed -e 's/$/\\n/g' | tr -d '\n' > client-cert
kubectl config view --minify --raw --output 'jsonpath={..user.client-key-data}' | base64 -d | sed -e 's/$/\\n/g' | tr -d '\n' > client-key
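Before going further, it can be worth sanity-checking the extracted certificate; a quick look at its subject and expiry date, assuming openssl is available:

# decode the certificate again and show who it was issued to and when it expires
kubectl config view --minify --raw --output 'jsonpath={..user.client-certificate-data}' | base64 -d | openssl x509 -noout -subject -enddate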

Then I added the values to an ExecCredential stored in pass, starting with this command:

pass edit k8s/credentials

This will open the editor set in the EDITOR environment variable. Then I pasted this into it:

{
  "apiVersion": "client.authentication.k8s.io/v1",
  "kind": "ExecCredential",
  "status": {
    "clientCertificateData": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
    "clientKeyData": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
  }
}

I replaced the clientCertificateData with the content of the client-cert file extracted with the previous command and the clientKeyData with the content of the client-key file. Finally, the entire file content should be squashed into a single line of text, and then the editor can be closed.
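Since the certificate and key now live encrypted in pass, the plaintext temporary files shouldn’t stick around; assuming GNU coreutils is available:

# overwrite and remove the temporary plaintext files
shred -u client-cert client-key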

If everything worked as expected, pass has now stored that file content at ~/.password-store/k8s/credentials.gpg, encrypted with the public key given in the pass init command. Try it out by running this command:

pass show k8s/credentials

If you haven’t run any commands which require decryption up to now, a popup should appear from your pinentry program asking you to unlock your GnuPG private key. This will even appear when you’ve previously unlocked that same private key for local use on your desktop machine, as GnuPG treats the local and remote machine as two different instances, for security reasons.
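Since kubectl will parse the output of pass show as JSON on every invocation, it’s also worth checking that the stored ExecCredential is well-formed; a quick check, assuming jq is installed:

# exits non-zero if the JSON is malformed or the field is missing
pass show k8s/credentials | jq -e '.status.clientCertificateData' > /dev/null && echo OK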

The final step is to adapt the ~/.kube/config file to use the credentials from pass. For that, I opened the file and edited it to look like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <Cluster CA CERT>
    server: https://k8s.example.com:6443
  name: my-kube-cluster
contexts:
- context:
    cluster: my-kube-cluster
    user: my-kube-user
  name: my-kube-user@my-kube-cluster
current-context: my-kube-user@my-kube-cluster
kind: Config
preferences: {}
users:
- name: my-kube-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      command: pass
      args:
        - show
        - k8s/credentials
      interactiveMode: IfAvailable

The only change necessary is in the users array, where the user: entry for your user should be changed to contain the exec section shown, instead of the client-certificate-data and client-key-data entries.

And with that, kubectl executes pass show k8s/credentials whenever it needs the credentials. This doesn’t just work for kubectl: I’ve also tested it with the Ansible k8s modules.
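A simple end-to-end test is any kubectl command that talks to the API server; the first call in a session should trigger the pinentry popup on the desktop:

kubectl get nodes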