With release 4.4.0, Mastodon introduced a Prometheus exporter. In this post, I will configure it and show the data it provides.
With the new release, Mastodon provides metrics from Ruby and Sidekiq. I’ve attached examples for both to this post; see here for Ruby and here for Sidekiq.
The information is not actually that interesting; it’s mostly generic process data. But I did find at least the Sidekiq data worth gathering. It will provide an interesting future look into my usage of Mastodon, and perhaps even into the activity in the Fediverse overall (or at least the part I’m connected to).
I’m running Mastodon via the official Helm chart, so I enabled the metrics exporters via the values.yaml file like this:
```yaml
mastodon:
  metrics:
    statsd:
      exporter:
        enabled: false
    prometheus:
      enabled: true
      sidekiq:
        detailed: true
```
As I’ve noted above, I didn’t find the Ruby data interesting at all, so I did not enable the detailed data for that.
Enabling the Prometheus exporter adds containers running the exporter to the Sidekiq and Web Pods. Both listen on port 9394 by default. These ports are not added to any Services.
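Since the exporters simply serve the standard Prometheus text exposition format on `/metrics`, the output is easy to inspect or parse by hand, e.g. after a `kubectl port-forward` to port 9394. As a minimal sketch, here is a small Python parser for that format; the sample metric names and values below are illustrative assumptions, not copied from a real Mastodon instance:

```python
import re

def parse_metrics(text):
    """Parse Prometheus text exposition format into a {(name, labels): value} dict."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        m = re.match(r'^([a-zA-Z_:][a-zA-Z0-9_:]*)(\{[^}]*\})?\s+(\S+)', line)
        if m:
            name, labels, value = m.groups()
            metrics[(name, labels or "")] = float(value)
    return metrics

# Illustrative sample resembling exporter output; metric names are hypothetical.
sample = """\
# HELP sidekiq_jobs_executed_total Total number of executed jobs
# TYPE sidekiq_jobs_executed_total counter
sidekiq_jobs_executed_total{queue="default"} 1234
sidekiq_failed_jobs 7
rss 523423744
"""

parsed = parse_metrics(sample)
print(parsed[("sidekiq_jobs_executed_total", '{queue="default"}')])  # 1234.0
```

This is also a quick way to eyeball which metric names the exporter emits before deciding what to drop at ingestion.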
To instruct my Prometheus instance to scrape the endpoints, I created a PodMonitor like this:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: sidekiq-metrics
  labels:
    {{- range $label, $value := .Values.commonLabels }}
    {{ $label }}: {{ $value | quote }}
    {{- end }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: sidekiq-all-queues
      app.kubernetes.io/instance: mastodon
      app.kubernetes.io/name: mastodon
      homelab/part-of: mastodon
  podMetricsEndpoints:
    - port: prometheus
      path: /metrics
      scheme: http
      interval: 1m
      metricRelabelings:
        - sourceLabels:
            - "__name__"
          action: drop
          regex: collector_.*
        - sourceLabels:
            - "__name__"
          action: drop
          regex: heap_.*
        - sourceLabels:
            - "__name__"
          action: drop
          regex: rss
        - sourceLabels:
            - "__name__"
          action: drop
          regex: malloc_increase_bytes_limit
        - sourceLabels:
            - "__name__"
          action: drop
          regex: oldmalloc_increase_bytes_limit
        - sourceLabels:
            - "__name__"
          action: drop
          regex: major_gc_ops_total
        - sourceLabels:
            - "__name__"
          action: drop
          regex: minor_gc_ops_total
        - sourceLabels:
            - "__name__"
          action: drop
          regex: allocated_objects_total
        - sourceLabels:
            - "__name__"
          action: drop
          regex: sidekiq_job_duration_seconds.*
        - sourceLabels:
            - "__name__"
          action: drop
          regex: active_record_connection_pool.*
```
Nothing really special about it, apart from dropping a couple of metrics at ingestion that I did not find too interesting.
One note: If you’ve got network policies in use, make sure that your Prometheus instance can actually reach the Mastodon Pods.
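For illustration, a NetworkPolicy allowing such scrapes could look like the following sketch. The namespace and label selectors here are assumptions and need to match your actual Prometheus and Mastodon deployments:

```yaml
# Illustrative example; adjust namespace and labels to your environment.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: mastodon
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
      ports:
        - port: 9394
          protocol: TCP
```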
Next I went to my Grafana instance and created a few panels in a fresh dashboard to show the interesting data. I created a couple of stats panels first:

The overview stats panels in my Mastodon dashboard.
Then I’ve also got two time series panels, starting with the total jobs by type:

The current jobs running.
Next, I’ve also got a plot for the failed jobs:

Failed jobs during the same time frame.
I would have liked a bit more info, to be honest. At least the general instance information available in Mastodon’s admin dashboard would have been nice.
But this is enough for now, and it’s going to be interesting to see how the daily jobs develop in the future.