Wherein I migrate my Gitea instance to Forgejo.
The Git forge Gitea is one of the oldest services in my Homelab. I set up the first instance about ten years ago, when a budgetary problem forced me to switch my Homeserver to a Pi 3, which wasn’t really able to run Gitlab, my previous hosting platform. So Gitea it was. Then I had another Gitlab phase after those budgetary constraints were decisively lifted. And then I returned to Gitea in 2021, because Gitlab was really, really annoying me. I have been quite happy with Gitea. It provides me with a nice UI for my repos and a convenient place for issue tracking, although I’ve never really used that feature much. A couple of years ago, I also added CI with Drone, but that’s about all the features I ever needed from a Git forge.
Save for statistics. I really like statistics. That was my one gripe about
the switch away from Gitlab - they’ve got nice Git statistics. But Gitea at
least has an activity heatmap:
My Gitea activity heatmap.
But today I want to talk about my switch to Forgejo, which started out as a soft fork of Gitea, but has become a hard fork at this point. Why? Well, mostly smell? I was pretty surprised when Gitea announced that they were going a bit more in the corporate direction. Sure, that’s fine with me, and we all need to make money somehow. But after the introduction of Gitea Cloud, their SaaS offering, it felt just a bit too corporate for my tastes. And then there was Forgejo, which has a pretty open, community-led process. Its trademark and domain are owned by Codeberg e.V., a German non-profit that runs the Codeberg Git hosting platform - based on Forgejo. That just has a nice ring to it. In addition, the main development work on federation of Git forges is happening in Forgejo. And while my Forgejo instance is not public right now, I might very well make it public once federation arrives.
Before I get to the configuration, one typical Michael thing: I had originally planned to make the switch as part of migrating Gitea to k8s. I sat down to start that move on a nice Saturday morning in February. Then I searched around a bit for information on migrating a Gitea instance to Forgejo. And one of the first hits was this Forgejo release post. It announced that Gitea 1.22 was the last version where a switch was possible by just changing the container images. And now guess what I had done the previous evening…
So migrating all repos by hand it was.
The setup
I will not say too much about the Forgejo setup itself. It is very similar to
my Gitea setup. In fact, I started by just copying all the manifests and Helm
values.yaml
file from my Gitea setup. If you’re interested in an in-depth
description, have a look at my post on migrating Gitea to k8s.
But for completeness’ sake, here is my values.yaml
file for the Forgejo Helm chart
in version 12.5.0:
replicaCount: 1
image:
  rootless: true
strategy:
type: Recreate
containerSecurityContext:
capabilities:
add:
- SYS_CHROOT
service:
ssh:
type: LoadBalancer
port: 2222
externalTrafficPolicy: Local
annotations:
external-dns.alpha.kubernetes.io/hostname: git.example.com
labels:
homelab/public-service: "true"
ingress:
enabled: true
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: secureweb
hosts:
- host: forgejo.example.com
paths:
- path: /
pathType: Prefix
tls:
- hosts:
- forgejo.example.com
httpRoute:
enabled: false
route:
enabled: false
resources:
requests:
cpu: 800m
limits:
memory: 2000Mi
persistence:
enabled: true
create: true
mount: true
size: 15Gi
accessModes:
- ReadWriteOnce
storageClass: rbd-bulk
signing:
enabled: false
gitea:
admin:
username: "forgejo-admin"
password: "12345"
passwordMode: initialOnlyRequireReset
metrics:
enabled: false
oauth:
- name: "Keycloak"
provider: "openidConnect"
existingSecret: oidc-credentials
autoDiscoverUrl: "https://login.example.com/realms/example/.well-known/openid-configuration"
config:
APP_NAME: "My Forgejo"
RUN_MODE: "prod"
server:
SSH_DOMAIN: "git.example.com"
SSH_PORT: 2222
log:
LEVEL: Info
database:
DB_TYPE: "postgres"
LOG_SQL: false
oauth2:
ENABLED: true
service:
DISABLE_REGISTRATION: true
REQUIRE_SIGNIN_VIEW: true
DEFAULT_KEEP_EMAIL_PRIVATE: true
DEFAULT_ALLOW_CREATE_ORGANIZATION: true
      DEFAULT_ORG_VISIBILITY: private
DEFAULT_ORG_MEMBER_VISIBLE: false
DEFAULT_ENABLE_TIMETRACKING: true
SHOW_REGISTRATION_BUTTON: false
repository:
SCRIPT_TYPE: bash
DEFAULT_PRIVATE: private
DEFAULT_BRANCH: main
queue:
TYPE: redis
CONN_STR: "addr=redis.redis.svc.cluster.local:6379,db=1"
WORKERS: 1
BOOST_WORKERS: 5
admin:
DEFAULT_EMAIL_NOTIFICATIONS: disabled
openid:
ENABLE_OPENID_SIGNIN: false
webhook:
ALLOWED_HOST_LIST: private
mailer:
ENABLED: true
SUBJECT_PREFIX: "[Forgejo]"
SMTP_ADDR: mail.example.com
SMTP_PORT: "465"
FROM: "forgejo@mei-home.net"
USER: "apps@mei-home.net"
cache:
ADAPTER: "redis"
INTERVAL: 60
HOST: "network=tcp,addr=redis.redis.svc.cluster.local:6379,db=1,pool_size=100,idle_timeout=180"
ITEM_TTL: 7d
session:
PROVIDER: redis
PROVIDER_CONFIG: network=tcp,addr=redis.redis.svc.cluster.local:6379,db=1,pool_size=100,idle_timeout=180
time:
DEFAULT_UI_LOCATION: "Europe/Berlin"
cron:
ENABLED: true
RUN_AT_START: false
cron.archive_cleanup:
ENABLED: true
RUN_AT_START: false
SCHEDULE: "@every 24h"
cron.update_mirrors:
ENABLED: false
RUN_AT_START: false
cron.repo_health_check:
ENABLED: true
RUN_AT_START: false
SCHEDULE: "0 30 5 * * *"
TIMEOUT: "5m"
cron.check_repo_stats:
ENABLED: true
RUN_AT_START: true
SCHEDULE: "0 0 5 * * *"
cron.update_migration_poster_id:
ENABLED: true
RUN_AT_START: true
SCHEDULE: "@every 24h"
cron.sync_external_users:
ENABLED: true
RUN_AT_START: false
SCHEDULE: "@every 24h"
UPDATE_EXISTING: true
cron.deleted_branches_cleanup:
ENABLED: true
RUN_AT_START: true
SCHEDULE: "@every 24h"
migrations:
ALLOW_LOCALNETWORKS: true
packages:
ENABLED: false
storage:
STORAGE_TYPE: minio
MINIO_ENDPOINT: rook-ceph-rgw-rgw-bulk.rook-cluster.svc:80
MINIO_LOCATION: ""
MINIO_USE_SSL: false
actions:
ENABLED: false
additionalConfigFromEnvs:
- name: FORGEJO__DATABASE__HOST
valueFrom:
secretKeyRef:
name: forgejo-pg-cluster-app
key: host
- name: FORGEJO__DATABASE__NAME
valueFrom:
secretKeyRef:
name: forgejo-pg-cluster-app
key: dbname
- name: FORGEJO__DATABASE__USER
valueFrom:
secretKeyRef:
name: forgejo-pg-cluster-app
key: user
- name: FORGEJO__DATABASE__PASSWD
valueFrom:
secretKeyRef:
name: forgejo-pg-cluster-app
key: password
- name: FORGEJO__MAILER__PASSWD
valueFrom:
secretKeyRef:
name: mail-pw
key: pw
- name: FORGEJO__STORAGE__MINIO_BUCKET
valueFrom:
configMapKeyRef:
name: forgejo-bucket
key: BUCKET_NAME
- name: FORGEJO__STORAGE__MINIO_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: forgejo-bucket
key: AWS_ACCESS_KEY_ID
- name: FORGEJO__STORAGE__MINIO_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: forgejo-bucket
key: AWS_SECRET_ACCESS_KEY
redis-cluster:
enabled: false
redis:
enabled: false
postgresql-ha:
enabled: false
postgresql:
enabled: false
When migrating from Gitea to Forgejo by doing a copy+paste of the values.yaml
for their respective Helm charts, there are a few differences to be taken into
account.
First, all of the environment variables should be prefixed with FORGEJO instead of GITEA.
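To make the rename concrete, here is a minimal sketch showing the database host variable under both prefixes. The Forgejo side is taken from my values.yaml above; the Gitea side, including the secret name, is a hypothetical reconstruction of the old setup:
# Gitea chart: environment variables use the GITEA__ prefix
- name: GITEA__DATABASE__HOST
  valueFrom:
    secretKeyRef:
      name: gitea-pg-cluster-app  # hypothetical name of the old secret
      key: host
# Forgejo chart: the same setting, now with the FORGEJO__ prefix
- name: FORGEJO__DATABASE__HOST
  valueFrom:
    secretKeyRef:
      name: forgejo-pg-cluster-app
      key: host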
Another difference is the way Actions, the CI system, is disabled.
I’m running Woodpecker as my CI, so I didn’t need
Actions. In the Gitea Helm chart, Actions is disabled like this:
actions:
enabled: false
In Forgejo, there is no specific Helm value to do so; instead, the corresponding Forgejo config option needs to be set:
gitea:
actions:
ENABLED: false
I’ve also switched my approach to the admin account config. In Gitea, I already had an admin account, because I was only migrating from the Nomad setup to k8s. But for Forgejo, I was creating an entirely fresh instance, so I chose this config:
gitea:
admin:
username: "forgejo-admin"
password: "12345"
passwordMode: initialOnlyRequireReset
It creates the forgejo-admin
account and sets the password initially to 12345
.
The initialOnlyRequireReset
setting requires a password reset upon first login, after which the chart will
never touch the password again.
And then, perhaps the most important setting: the Redis connection string. I only have one Redis instance in my Homelab, so it would be shared between Gitea and Forgejo, which would need to run in parallel while I was migrating the repos.
network=tcp,addr=redis.redis.svc.cluster.local:6379,db=1,pool_size=100,idle_timeout=180
The important piece in this connection string, and in all the others in the values.yaml,
is the db=1
setting. My Gitea chart had that set to db=0
. And so
did my Forgejo instance during the entire migration. This had some frustrating/funny
consequences I will describe later.
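To illustrate, here is roughly what the queue configuration looked like while both instances were sharing Redis DB 0, compared to the fixed Forgejo config shown above. This is only a sketch based on my values.yaml; the relevant difference is the db= part:
# Gitea, and initially also Forgejo: both queues pointed at Redis DB 0
queue:
  TYPE: redis
  CONN_STR: "addr=redis.redis.svc.cluster.local:6379,db=0"
# Forgejo after the fix: its own Redis DB
queue:
  TYPE: redis
  CONN_STR: "addr=redis.redis.svc.cluster.local:6379,db=1"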
And that’s really already it. All of the other settings are the same as my Gitea instance and described in detail in the previous post I linked above.
Repo migration
At this point, I had Gitea and Forgejo running in parallel in the cluster. The main thing left was to migrate the repositories. Luckily, Forgejo can import repositories from Gitea. For that, I needed to provide an API token for Gitea to Forgejo. This token can be generated by any user, under User Settings -> Applications:

Gitea’s API token generation form.
Once the token is generated, it will be shown at the top of the screen:
The token shown after generation.
Then I started the migration. Which was when the frustration began. Forgejo’s
Gitea migration screen looks like this:
An example of the migration form.
After hitting “Migrate repository” on the first repo, I got this screen:
Forgejo’s migration waiting screen.
And then nothing further happened. After a while, I hit the “Cancel” button. A new modal with yes/no buttons appeared. I hit “Yes”. Still nothing happened. I was still on the migration waiting screen. Something had gone wrong. As I could not cancel, I tried restarting the Forgejo instance. Still the same thing: opening the repo brought me right back to this screen. I logged out and back in. Still the same thing. I logged in as admin and checked the repo. Still the same thing. I finally ended up deleting the repo via the admin interface.
Then I tried again. With exactly the same parameters. And exactly the same results.
Starting to get frustrated, I opened the logs of both Forgejo and Gitea. In the Forgejo logs, I only saw these lines, repeating ad infinitum:
2025-05-18 15:35:43.000 router: completed GET /user/task/1 for 10.8.14.218:60046, 200 OK in 48.3ms @ user/task.go:16(user.TaskStatus)
2025-05-18 15:35:42.000 router: completed GET /homelab/homelab for 10.8.14.218:60046, 200 OK in 167.5ms @ repo/view.go:798(repo.Home)
2025-05-18 15:35:42.000 router: completed POST /repo/migrate for 10.8.14.218:60046, 303 See Other in 1266.1ms @ repo/migrate.go:152(repo.MigratePost)
In the Gitea logs, I saw a couple of errors though:
2025-05-18 15:35:43.000 Run task failed: failed to decrypt by secret, the key (maybe SECRET_KEY?) might be incorrect: AesDecrypt invalid decrypted base64 string: illegal base64 data at input byte 0
2025-05-18 15:35:43.000 runMigrateTask[1] by DoerID[2] to RepoID[1] for OwnerID[3] failed: failed to decrypt by secret, the key (maybe SECRET_KEY?) might be incorrect: AesDecrypt invalid decrypted base64 string: illegal base64 data at input byte 0
2025-05-18 15:35:43.000 FinishMigrateTask[1] by DoerID[2] to RepoID[1] for OwnerID[3] failed: failed to decrypt by secret, the key (maybe SECRET_KEY?) might be incorrect: AesDecrypt invalid decrypted base64 string: illegal base64 data at input byte 0
I had no idea what was going on here. Why would there be some decryption error? I was perfectly able to navigate to the repo in the Gitea UI, and I was also able to clone it. So I just tried again. And this time it worked. No indication of any issue.
This pattern repeated for all 78 repos I migrated. Almost every repo required multiple attempts at migration. Randomly, some would succeed on the first attempt, while others would require a dozen. And I wasn’t able to make any sense of it.
So I just powered through. Spent the entirety of my Sunday doing this. It was very decidedly not fun.
Towards the end, I saw a couple of logs in Gitea like this:
2025-05-18 23:30:00.000 Run task failed: repository does not exist [id: 194, uid: 0, owner_name: , name: ]
2025-05-18 23:30:00.000 runMigrateTask[194] by DoerID[2] to RepoID[194] for OwnerID[2] failed: repository does not exist [id: 194, uid: 0, owner_name: , name: ]
2025-05-18 23:28:58.000 Run task failed: repository does not exist [id: 192, uid: 0, owner_name: , name: ]
2025-05-18 23:28:58.000 runMigrateTask[192] by DoerID[2] to RepoID[192] for OwnerID[2] failed: repository does not exist [id: 192, uid: 0, owner_name: , name: ]
I was getting a bit confused - why was Gitea running migration tasks for repos which weren’t even there? Did Forgejo provide invalid repo IDs in the API requests? For some reason, I did not find it weird that Gitea was even running any migration tasks at all.
But I didn’t care very much - I was finally done.
Enabling Woodpecker
I next went to migrate my Woodpecker CI over to using Forgejo instead of Gitea. This was pretty straightforward: I just replaced the Gitea config variables with the Forgejo ones:
server:
WOODPECKER_FORGEJO: "true"
WOODPECKER_FORGEJO_URL: "https://forgejo.example.com"
extraSecretNamesForEnvFrom:
- forgejo-secret
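The forgejo-secret referenced above holds the OAuth2 client credentials Woodpecker uses to talk to the forge. As a rough sketch, assuming Woodpecker’s WOODPECKER_FORGEJO_CLIENT and WOODPECKER_FORGEJO_SECRET variables (check the Woodpecker docs for your version to be sure), it could look something like this:
apiVersion: v1
kind: Secret
metadata:
  name: forgejo-secret
  namespace: woodpecker  # hypothetical namespace
stringData:
  WOODPECKER_FORGEJO_CLIENT: "<OAuth2 client ID from Forgejo>"
  WOODPECKER_FORGEJO_SECRET: "<OAuth2 client secret from Forgejo>"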
For full details on how I originally set up Woodpecker with Gitea, have a look at this post. Afterwards, I deleted my old repo configs and added them anew from Forgejo. I don’t think there’s any migration tool to do this, but it was just a half dozen repos, so I didn’t mind too much.
What I did mind was that CI runs did not always get triggered. Sometimes, a push event just wouldn’t trigger the webhook, and Woodpecker would have no idea that a push just happened.
Issues with events going missing
At that point I was starting to think that there was something seriously wrong with my setup, but I still had no idea what it might be. And I was observing an additional problem: like Gitea, Forgejo shows a stream of events on the profile page by default, such as pushes to repositories or the creation of issues. And I was seeing that not all events showed up there. The pushes themselves worked, and I was able to see the new commits in Forgejo’s UI, but it seemed the event was getting lost somewhere. Which fit the fact that Woodpecker’s webhooks also weren’t triggered reliably.
Still with no idea what was going on, I left my Gitea instance running while I wrote up a ticket in Forgejo’s bug tracker, see here. I figured that I could reproduce the problem pretty reliably, and the Gitea instance wasn’t using many resources, so perhaps I could help the Forgejo team with debugging.
I then got a few comments, wondering why it looked like Gitea was running migrations at all. And one of the comments mentioned that it looked like Gitea and Forgejo were sharing databases. But I was 100% sure that they weren’t.
And then it hit me. They weren’t sharing Postgres DBs - but they were certainly sharing a Redis instance, and using it for queuing! So there was my issue. Gitea thought it was being asked to run migrations on repos it knew nothing about because it was seeing, and trying to handle, Forgejo’s queued tasks. And Forgejo’s migrations weren’t finishing because the actual migration task was getting consumed (and then discarded) by Gitea. And that was also what had happened to the missing activity feed entries and the events that should have triggered Woodpecker’s webhooks.
So the issue was entirely homemade. As is only right and proper for a Homelab.
Forgejo is a perfectly fine piece of software and has not given me any grief
at all since I switched it to a different Redis DB by changing the db=0
part
of the Redis connection strings to db=1
.
Conclusion
The main lesson here: spend more time looking for the fault in your own setup.
I could have done a lot of other things, especially with those few very frustrating hours last Sunday. But at least I’ve now learned another good lesson: make sure you put your apps into different Redis DBs when they’re sharing an instance.