In my $dayjob, I’m a build engineer in the CI team of a large company. So I’m reasonably confident that this is going to be only the first post in a long series on the CI setup for my Smokeweb project.
I like CIs and the automated testing they come with. I think it’s one of the better ideas the tech industry has come up with, and I see its benefits every day at work. So I also have CIs for most of my private projects.
Over the past week, I’ve been writing the first lines of code for my Smokeweb project. Just some general plumbing and scaffolding work, plus logging setup and command line flags. With the first code established, the next task was the introduction of a Makefile for the project, as well as a CI to automatically test it all.
For all matters of CI, from project CIs to Docker image builds for my Homelab, I’ve got a WoodpeckerCI instance running locally, connected to my Forgejo instance. If you’d like to read more about the setup, see here.
After creating the first build job, I was a bit shocked by how long it ran: two minutes, even on a Raspberry Pi 4, is a tad long for a project of only a couple hundred lines of code. Looking at the logs, it turned out that the long duration wasn’t due to the build of my project itself, but rather to all of the dependencies it needs. That makes a lot more sense.
Researching a bit, I came across two things: Go build and test caching and the Go module cache. The former caches build results and test results, while the latter caches downloaded module sources.
I decided I wanted both in my CI, so the first thing I needed was a place to put the caches where they would persist between pipeline runs. For this, Woodpecker allows mounting additional volumes. These are separate from the volume Woodpecker automatically creates for every Workflow, which is shared only between that Workflow’s steps and deleted once the Workflow finishes. With the k8s runner I’m using, both the Workflow volume and the additional volumes can be configured as PersistentVolumeClaims. Storing the cache on the Workflow’s volume would probably already improve the runtime a bit once I add more steps, but each Workflow run would still have to start from scratch. To avoid this, I’ve created an additional PVC like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gocache-volume
  labels:
    {{- range $label, $value := .Values.commonLabels }}
    {{ $label }}: {{ $value | quote }}
    {{- end }}
spec:
  storageClassName: cephfs-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I started out with a 5 GB volume, as my local build cache (at ~/.cache/go-build
by default) is about 256 MB at the moment, and my module cache (at ~/go/pkg) is
at 752 MB. That should give me some headroom. I’m also using my CephFS-based
StorageClass for the PVC, as this allows me to mount the cache to multiple
Pods, e.g. if I ever decide to separate the pipeline into multiple Workflows.
With that done, I set my CI Workflow up like this:
when:
  - event: pull_request

variables:
  - &golang-build-cache /ci-go-cache/build-cache
  - &golang-mod-cache /ci-go-cache/mod-cache
  - &golang-image golang:1.25.6

steps:
  - name: build
    image: *golang-image
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOCACHE: *golang-build-cache
      GOMODCACHE: *golang-mod-cache
    commands:
      - make build
One very, very important note: The steps[].environment key is a map. Not a list.
Thank me later. 😉
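To spell out that footgun, here is the shape Woodpecker expects (the wrong variant is shown only as a comment; the paths match the variables above):

```yaml
environment:
  # Correct: environment is a YAML map of NAME: value pairs.
  GOCACHE: /ci-go-cache/build-cache
  GOMODCACHE: /ci-go-cache/mod-cache

# Wrong: a list of NAME=value strings, as some other CI systems use:
# environment:
#   - GOCACHE=/ci-go-cache/build-cache
```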
With this configuration, the first run of course again took two minutes, but the
next run (after I had figured out that environment is a map, not a list) took
only 25 seconds:

25 seconds sounds a lot better than 2 minutes.
For good measure, I also introduced another step that explicitly downloads all module dependencies up front, so that this isn’t done separately by each individual step once I have more than one running in parallel:
when:
  - event: pull_request

variables:
  - &golang-build-cache /ci-go-cache/build-cache
  - &golang-mod-cache /ci-go-cache/mod-cache
  - &golang-image golang:1.25.6

steps:
  - name: prepare mod cache
    image: *golang-image
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOMODCACHE: *golang-mod-cache
    commands:
      - go mod download -x
  - name: build
    image: *golang-image
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOCACHE: *golang-build-cache
      GOMODCACHE: *golang-mod-cache
    depends_on:
      - prepare mod cache
    commands:
      - make build
Note how the build step now depends on the new prepare mod cache step, which runs go mod download -x to download the external dependencies of my module. Adding the depends_on here also has the effect of enabling parallelism: steps whose dependencies are satisfied can run at the same time, instead of strictly one after another.
My final pipeline looks like this for now:
when:
  - event: pull_request

variables:
  - &golang-build-cache /ci-go-cache/build-cache
  - &golang-mod-cache /ci-go-cache/mod-cache
  - &golang-image golang:1.25.6

steps:
  - name: prepare mod cache
    image: *golang-image
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOMODCACHE: *golang-mod-cache
    commands:
      - go mod download -x
  - name: build
    image: *golang-image
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOCACHE: *golang-build-cache
      GOMODCACHE: *golang-mod-cache
    depends_on:
      - prepare mod cache
    commands:
      - make build
  - name: UTs
    image: *golang-image
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOCACHE: *golang-build-cache
      GOMODCACHE: *golang-mod-cache
    depends_on:
      - prepare mod cache
    commands:
      - make ut
  - name: Linters
    image: *golang-image
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOCACHE: *golang-build-cache
      GOMODCACHE: *golang-mod-cache
    depends_on:
      - prepare mod cache
    commands:
      - make fmt vet modules/tidy-check
  - name: Golang CI
    image: golangci/golangci-lint:v2.11.3
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOCACHE: *golang-build-cache
      GOMODCACHE: *golang-mod-cache
    depends_on:
      - prepare mod cache
    commands:
      - golangci-lint run
Overall, this pipeline runs for about 53 seconds:

All of the steps save for ‘clone’ and ‘prepare mod cache’ ran in parallel.
One last thing still missing here is cleanup of the caches. Those 5 GB will likely last me quite a while, but still: the caches need proper cleanup eventually. I looked around a bit on that as well, but didn’t find any good solution. Seemingly, Golang doesn’t do judicious cleanups of the cache? You can only nuke the entire cache, which I find unfortunate. A task for later.
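One option I might try: since Go only offers the sledgehammer (go clean -cache and go clean -modcache empty the respective caches entirely), a scheduled Woodpecker workflow could at least automate the periodic nuking. This is an untested sketch; the cron name and the workflow shape are placeholders of my own:

```yaml
when:
  - event: cron
    cron: nuke-go-caches

steps:
  - name: clean caches
    image: golang:1.25.6
    volumes:
      - gocache-volume:/ci-go-cache
    environment:
      GOCACHE: /ci-go-cache/build-cache
      GOMODCACHE: /ci-go-cache/mod-cache
    commands:
      # The sledgehammer: empties both caches completely.
      - go clean -cache -modcache
```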
Before finishing, let’s lighten the mood a bit at my expense. Because you see, even though my code currently only contains a bit of scaffolding and startup implementation, I still managed to get no fewer than five issues on the first golangci-lint run:
+ golangci-lint run
cmd/init.go:22:15: ST1005: error strings should not be capitalized (staticcheck)
        ErrVersion = errors.New("Version flag received")
                     ^
cmd/init.go:53:33: ST1005: error strings should not be capitalized (staticcheck)
        return &application.Config{}, fmt.Errorf("Got invalid log type: %s", conf.LogType)
                                      ^
cmd/init.go:95:26: ST1005: error strings should not be capitalized (staticcheck)
        return slog.LevelInfo, fmt.Errorf("Got invalid debug level: %s", s)
                               ^
cmd/main.go:29:3: SA9003: empty branch (staticcheck)
        if err == flag.ErrHelp {
        ^
cmd/main.go:30:10: SA9003: empty branch (staticcheck)
        } else {
        ^
5 issues:
* staticcheck: 5
Why yes, I’m a bit embarrassed, especially about those empty-branch issues at the end there. I really did leave an empty if ... else ... in the code after having transformed it into a switch statement right above it. And then forgot to remove the empty if-else once I was done. 🤦