Wherein I end up replacing my Brief setup for RSS with FreshRSS.
Over the holidays, I visited my family and only had my laptop with me. While I
have most things properly synced, my RSS feed subscriptions are not. Up to now,
I’ve been using the Brief Firefox extension.
It looks like this:

Example of the Brief UI
And it was fine. I really don’t need much from an RSS reader. I don’t tend to read posts in my feed reader at all, it’s really just an aggregator for me. When the headline interests me, I read the article on the original page.
The big problem with Brief was that I would occasionally be on the road, away from my desktop, and wouldn't have all of my blogs around to read. The annoying part wasn't so much missing the current read state of individual articles, but rather that my subscriptions weren't synced between my desktop and laptop setups at all.
Nextcloud News
When I wrote a Fediverse post about my woes, Rachel noted that Nextcloud has an RSS reader with Nextcloud News, which could save me some setup compared to standalone solutions like Miniflux.
The install is pretty simple, but I hit a problem due to the way I’m handling Nextcloud’s cron. As I’ve noted in my Nextcloud setup post, I’m using the Webcron option, with a separate container which regularly hits the required endpoint and triggers Nextcloud’s background jobs. But this was a problem for the setup of News. As per its docs, it cannot work with Webcron. That’s because News has to run the feed fetching via the cron setup, and remote content fetching can take a while. So it’s restricted to using a normal cron job. I took this chance to finally dig deep enough into my setup to be able to use cron properly.
But before I did so, I had a look at the cron option offered by the News app. It’s a Python script which does the feed updates. I disregarded this option because it seems to require a Nextcloud admin account.
Next, I looked at options to run Nextcloud’s cron with a real cron job. This is famously complicated in a containerized setup, but Nextcloud provides an example in their docker-compose:
cron:
  image: nextcloud:fpm-alpine
  restart: always
  volumes:
    - nextcloud:/var/www/html:z
    # NOTE: The `volumes` config of the `cron` and `app` containers must match
  entrypoint: /cron.sh
  depends_on:
    - db
    - redis
Reproducing this setup in Nextcloud’s Pod resulted in this error:
crond: can't set groups: Operation not permitted
So I’d have to run the container with root permissions. Instead of doing that,
I decided to just re-write my original web cron script a little bit:
#!/bin/bash
echo "$(date): Launched task, sleeping for ${INITIAL_WAIT}"
sleep "${INITIAL_WAIT}"
while true; do
  php -f /var/www/html/cron.php 2>&1
  echo ""
  echo "$(date): Sleeping for ${SLEEPTIME}"
  sleep "${SLEEPTIME}"
done
That container then got all of the mounts and env variables of my main Nextcloud container, and now I’ve got Nextcloud’s cron running via this cron job, instead of Webcron.
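The `INITIAL_WAIT` and `SLEEPTIME` environment variables drive all of the timing. A quick local sketch of the same pattern, bounded to two iterations so it actually terminates (defaults here are only for the demonstration):

```shell
#!/bin/bash
# Same pattern as the cron loop above, but bounded so it terminates.
# INITIAL_WAIT and SLEEPTIME accept anything `sleep` understands, e.g. "5m".
INITIAL_WAIT="${INITIAL_WAIT:-0}"
SLEEPTIME="${SLEEPTIME:-0}"

sleep "${INITIAL_WAIT}"
for i in 1 2; do
  echo "$(date): iteration ${i}, sleeping for ${SLEEPTIME}"
  sleep "${SLEEPTIME}"
done
```

In the real container the loop never exits, so the container's restart policy only matters if PHP itself crashes the script.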
The web interface looks like this:

Example of the Nextcloud News UI.
There are two things I didn’t really like. One is that the feed an article is coming from isn’t shown in the list. That should not matter too much for most use cases, because the favicon is still shown. But starting to use YouTube’s RSS feeds was one of the things I wanted to do, and of course all of those feeds would just have YouTube’s favicon.
Also note the order of the videos. They’re not ordered purely by publishing date. Instead, the order seems to be first by feed, and only then by publishing date. Which for me ruins the usability of combined feeds like the YouTube folder here. This is a known issue, and seems to be related to the architecture of the News app if I’m reading the issue’s comments correctly.
At the same time, those two problems seemed to be only related to the UI. So I decided to look around for a desktop client for RSS. I ultimately landed on RSSGuard.
It works nicely with Nextcloud News and can properly sync feeds and the read/unread state of articles. One thing I'm not sure about, and it might just be me being a bit incompetent: adding feeds didn't seem to be possible in RSSGuard itself, only via the News web interface.
RSSGuard looks like this:

Example of RSSGuard
I liked this interface a bit better than News' web UI. The main issue was that I don't really like separate apps for things these days. For most things, I'd rather have a nice web interface.
In addition, I realized another annoying thing about Nextcloud News: it seems to use the "Last updated" date for article dates, not the published date. Take for example the topmost video in the above screenshot. It's this one. The date shown by both Nextcloud News and RSSGuard is 2025-12-24, but the video was actually published on 2025-10-07. I looked around a lot and couldn't find an option to always use the publishing date instead of the date the article was last updated.
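For context: YouTube serves Atom feeds, and each entry carries both a published and an updated timestamp, so the data to show the right date is there. A minimal entry sketch (timestamps made up to match the example above):

```xml
<entry>
  <title>Some video</title>
  <!-- the date I'd want shown -->
  <published>2025-10-07T10:00:00+00:00</published>
  <!-- the date Nextcloud News apparently keys on -->
  <updated>2025-12-24T08:00:00+00:00</updated>
</entry>
```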
This finally put me off Nextcloud News.
FreshRSS
Looking at other options, I finally decided on FreshRSS.
It’s written in PHP and supplies a container for deployments out of the box. It also supports OIDC for SSO and works nicely with my Keycloak instance. For data storage, it supports all the mainstream ones, including MySQL, PostgreSQL and SQLite. As I’m not foreseeing much load, I decided on staying with SQLite. Besides the database, it also needs some space for stuff like cached favicons.
The container already comes with an Apache instance, so no further web server for delivering static assets is required. The container also comes with a cron daemon, so there’s no need for setting up a separate process for triggering the feed update.
The setup in my Kubernetes cluster was pretty straightforward, so I will only provide the Deployment manifest here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: freshrss
spec:
  replicas: 1
  selector:
    matchLabels:
      homelab/app: freshrss
  strategy:
    type: "Recreate"
  template:
    metadata:
      labels:
        homelab/app: freshrss
    spec:
      automountServiceAccountToken: false
      securityContext:
        fsGroup: 1000
      containers:
        - name: freshrss
          image: freshrss/freshrss:{{ .Values.appVersion }}
          volumeMounts:
            - name: freshrss
              mountPath: /var/www/FreshRSS/data
              subPath: data
            - name: freshrss
              mountPath: /var/www/FreshRSS/extensions
              subPath: extensions
          resources:
            requests:
              cpu: 200m
              memory: 500Mi
          env:
            - name: TZ
              value: "Europe/Berlin"
            - name: CRON_MIN
              value: "2,32"
            - name: LISTEN
              value: "0.0.0.0:8080"
            - name: FRESHRSS_ENV
              value: "production"
            # My main Traefik instance as well as my k8s Pod CIDR
            - name: TRUSTED_PROXY
              value: "10.1.1.1 10.2.0.0/16"
            - name: OIDC_ENABLED
              value: "1"
            - name: OIDC_PROVIDER_METADATA_URL
              value: "https://login.example.com/realms/example/.well-known/openid-configuration"
            - name: OIDC_REMOTE_USER_CLAIM
              value: "preferred_username"
            - name: OIDC_SCOPES
              value: "openid profile"
            - name: OIDC_X_FORWARDED_HEADERS
              value: "X-Forwarded-Host X-Forwarded-Port X-Forwarded-Proto"
            - name: OIDC_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: oidc-secret
                  key: id
            - name: OIDC_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: oidc-secret
                  key: secret
            - name: OIDC_CLIENT_CRYPTO_KEY
              valueFrom:
                secretKeyRef:
                  name: oidc-encrypt-key
                  key: secret
          ports:
            - name: freshrss-http
              containerPort: 8080
              protocol: TCP
      volumes:
        - name: freshrss
          persistentVolumeClaim:
            claimName: freshrss-volume
Similar to what I wrote above on Nextcloud and cron, the FreshRSS container
needs to run as root, as it is running a cron daemon. The Apache instance drops
privileges and uses the www-data user with UID 33 though.
The CRON_MIN option configures the cron job updating feeds to run every 30 minutes, at hh:02 and hh:32.
Upon first visiting the FreshRSS URL, it will show a few setup pages for configuring the initial user/admin account and the database. When using OIDC for authentication, some care has to be taken: The username for the new user needs to be the same as the OIDC username. The relevant docs can be found here. The password provided on the page is not relevant, as it won’t be used when OIDC auth is enabled.
As Keycloak is not among the documented OIDC providers in the FreshRSS docs, here is a short overview of the config which worked for me for configuring the client in Keycloak:
- Root URL: https://freshrss.example.com
- Home URL: https://freshrss.example.com
- Valid Redirect URIs: https://freshrss.example.com:443/i/oidc*
  - Weirdly enough, the port is necessary here, as the FreshRSS container provides the redirect URL exactly like this. Without the port, Keycloak will reject the request.
- Valid post logout redirect URIs: https://freshrss.example.com/*
- Web origins: https://freshrss.example.com
- Client Authentication: On
- Authorization: Off
- Standard Flow: On
- All other check boxes off
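For reference, here is the same configuration expressed as a Keycloak client JSON import. This is a sketch based on Keycloak's client representation format; note that the post-logout redirect URIs live under `attributes` in that format:

```json
{
  "clientId": "freshrss",
  "rootUrl": "https://freshrss.example.com",
  "baseUrl": "https://freshrss.example.com",
  "redirectUris": ["https://freshrss.example.com:443/i/oidc*"],
  "webOrigins": ["https://freshrss.example.com"],
  "attributes": {
    "post.logout.redirect.uris": "https://freshrss.example.com/*"
  },
  "publicClient": false,
  "standardFlowEnabled": true,
  "authorizationServicesEnabled": false,
  "serviceAccountsEnabled": false
}
```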
Adding Feeds
Once the install was complete, I could start adding feeds. This is what FreshRSS' UI looks like:

Example of the FreshRSS UI right after finishing the setup.
The “FreshRSS releases” entry is a GitHub releases RSS feed for the FreshRSS project, which is added by default for all new users.
Note the “Received today – 4 January 2026” line at the top. I don’t really like this, as I don’t care when an article was fetched, but rather when it was published. This can be changed via a dropdown:

Switching to sorting the posts by publication date.
Adding a new feed works through the “+” at the top of the menu. It leads to this form:

The feed and category addition UI.
In the ‘Add a feed’ form, the ‘Feed URL’ doesn’t need to be the full URL of the feed’s XML file. FreshRSS can scan for the typical RSS links. E.g. when adding my blog’s home page into the field, it has no problem finding the correct RSS URL at https://blog.mei-home.net/index.xml.
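Under the hood this is standard feed autodiscovery: the page's HTML head advertises the feed via a link tag, which is what FreshRSS scans for. A minimal sketch of what it finds on a Hugo site like mine:

```shell
# What feed autodiscovery looks for: a <link rel="alternate"> in the HTML
# head. Hugo themes typically emit something like this.
cat <<'EOF' > /tmp/page.html
<head>
  <link rel="alternate" type="application/rss+xml" href="https://blog.mei-home.net/index.xml">
</head>
EOF
grep -o 'href="[^"]*"' /tmp/page.html
# -> href="https://blog.mei-home.net/index.xml"
```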
The “Type of feed source” section contains additional options, which allow for scraping a website which doesn’t provide an RSS feed and adding that to FreshRSS, but I haven’t tried that myself.
The “Advanced” section contains additional options, like setting additional headers to be sent while fetching the feed, or setting credentials for auth.
I don’t want to make this post any longer than it is already going to be, so I will list all the sites I subscribe to in a follow-up. But I wanted to note two things. First, GitHub provides RSS feeds on the release pages of projects, as the FreshRSS feed already demonstrates. Second, I’m also using YouTube’s feeds. YouTube provides an RSS feed per channel, and I’m now using those instead of YouTube’s subscriptions page.

The one thing I’m missing is the video durations. E.g. when cooking, I like to put on a longer video to listen to, but I can’t see the durations in FreshRSS, as they’re not provided as part of the RSS feeds. Another annoyance is that the feeds cannot be filtered to only proper videos: you also get the Shorts when subscribing to a channel. This bothers me a bit, but luckily most of the channels I’m following don’t do a lot of Shorts. I’m also going to have a look at FreshRSS’ filtering functionality; I’m pretty sure it should be possible to filter the Shorts via that feature.
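For anyone wanting to do the same: YouTube's per-channel feeds follow a fixed URL pattern, keyed on the channel ID (the ID below is a placeholder; the real one can be found in the channel page's URL or source):

```shell
# YouTube publishes an Atom feed per channel under a fixed URL pattern.
# CHANNEL_ID is a placeholder, not a real channel.
CHANNEL_ID="UCxxxxxxxxxxxxxxxxxxxxxx"
echo "https://www.youtube.com/feeds/videos.xml?channel_id=${CHANNEL_ID}"
# -> https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxxxxxxxxxxxxxxxxxx
```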
Open Sourcery
While working on setting up FreshRSS, I was again reminded why I love Open Source. One of the blogs I read wasn’t getting added to FreshRSS. When trying to add it, I was getting this error in the logs:
A feed could not be found at `https://blog.example.com/index.xml`; the status code is `200` and content-type is `` [https://blog.example.com/index.xml]
That was pretty weird, for two reasons. One, Brief didn’t have any issues adding this blog and handled it perfectly fine. And two, the blog is set up very similarly to mine: running Hugo, even with the same theme, backed by a Ceph S3 bucket and fronted by a Traefik instance. Even the Traefik setups are pretty similar. And yet, my blog worked fine in FreshRSS, and the other blog worked fine in Brief.
The next thing I tried was appending #force_feed to the feed URL, as proposed
in some FreshRSS issues for cases where the feed wasn’t getting added properly.
That resulted in an error again, but this time with a different message:
A feed could not be found at `https://blog.example.com/index.xml`. Empty body. [https://blog.example.com/index.xml#force_feed]
Empty body? I went ahead and curl’ed the index.xml. It worked perfectly fine,
no complaints. The content also looked fine. I verified that with the
W3C Feed Validator, and while it showed a few
warnings, it didn’t have any major issues with the feed either.
Checking the cURL output a few more times, I started comparing it to the output
for my blog - as I said, our setups are pretty similar. And I finally found the
one major difference: The blog which wasn’t working in FreshRSS was sending
a Content-Encoding: aws-chunked header, while mine wasn’t. And looking at that
header’s docs,
it seemed to be intended to indicate the compression algorithm used. And aws-chunked
wasn’t among the normal values allowed for that header.
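The comparison itself is easy to reproduce. Against the live sites it would be `curl -sI <url> | grep -i content-encoding`; sketched here against a captured copy of the problematic response's headers, since the actual blog URL is redacted above:

```shell
# Comparing Content-Encoding between the two blogs, using a captured copy
# of the problematic blog's response headers.
cat <<'EOF' > /tmp/broken-blog.headers
content-encoding: aws-chunked
content-type: application/rss+xml
EOF
grep -i '^content-encoding' /tmp/broken-blog.headers
# -> content-encoding: aws-chunked
```

My own blog's response simply has no `content-encoding` line at all, which is what made the difference stand out.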
I assumed that the issue was somehow related to the fact that the blog was delivered from a Ceph S3 bucket, but wasn’t able to figure out anything more. But I did wonder why curl’ing on the command line worked without issue, but FreshRSS had problems. And here is why I love Open Source software: Instead of only being able to file an issue with the project, I was able to check what’s wrong myself.
FreshRSS has good developer documentation. I cloned the repository, and then launched a test instance like this:
podman run --rm \
  -p 8080:80 \
  -e FRESHRSS_ENV=development \
  -e TZ=Europe/Paris \
  -e 'CRON_MIN=1,31' \
  -v "$(pwd)":/var/www/FreshRSS \
  -v freshrss_data:/var/www/FreshRSS/data \
  --name freshrss \
  freshrss/freshrss:edge
I don’t speak PHP at all, but I was still able to litter a few print statements
around the code, and finally figured out that after trying to fetch the index.xml,
the body of the response was indeed empty. That’s why the initial attempt said
that there was no feed found, and why the attempt with #force_feed showed an
Empty Body issue.
Then I looked at the actual fetching code here. The interesting part was this:
if (curl_errno($fp) === CURLE_WRITE_ERROR || curl_errno($fp) === CURLE_BAD_CONTENT_ENCODING) {
    $this->error = 'cURL error ' . curl_errno($fp) . ': ' . curl_error($fp); // FreshRSS
    $this->on_http_response($responseBody === false ? false : $responseHeaders . $responseBody, $curl_options);
    $this->error = null; // FreshRSS
    curl_setopt($fp, CURLOPT_ENCODING, 'none');
    $responseHeaders = '';
    $responseBody = curl_exec($fp);
    $responseHeaders .= "\r\n";
}
In my tests, FreshRSS runs into this if condition with CURLE_BAD_CONTENT_ENCODING.
Printing the $this->error value gives this result:
cURL error 61: Unrecognized content encoding type. libcurl understands deflate, gzip, br, zstd content encodings
Checking further and printing the $responseHeaders value as well shows that
the Content-Encoding header is set there too:
HTTP/2 200
accept-ranges: bytes
content-encoding: aws-chunked
content-type: application/rss+xml
date: Tue, 30 Dec 2025 22:50:37 GMT
etag: "xxx"
last-modified: Sat, 13 Dec 2025 21:23:42 GMT
server: Ceph Object Gateway (squid)
x-amz-meta-md5chksum: xxx
content-length: 11242
The original intention of this code seems to have been to disable content-encoding handling in case there was an encoding error, with the expectation that the second curl_exec call would then succeed. But it just returned the same error again and, importantly, did not set the body. Crucially for the rest of the fetching code, it still stored the HTTP return code, which was “200”. So all following code assumed that the fetch was successful.
Then I looked at the documentation for the CURLOPT_ENCODING option, which is set to 'none' in the above code. It was obsoleted by the CURLOPT_ACCEPT_ENCODING option a long time ago, and 'none' isn’t actually a valid value. When this option is set, cURL will always try to decompress the response, as it assumes that it needs to. But it also always checks whether it actually supports the Content-Encoding value in the response, and if it doesn’t, it shows the above error.
It looked to me like FreshRSS had added this specific branch of the code to handle exactly this issue, but it did not work (anymore?). Reading through the option’s docs, it seemed that null needed to be set instead to completely disable the handling. So I changed the CURLOPT_ENCODING option to null instead of 'none', and the feed was added without any issue.
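The change itself is a one-liner. Demonstrated here on a stand-in file rather than the actual vendored SimplePie source (the real file path in the FreshRSS repo is omitted):

```shell
# The one-line fix, demonstrated on a stand-in file rather than the actual
# vendored SimplePie source file.
cat <<'EOF' > /tmp/File.php
curl_setopt($fp, CURLOPT_ENCODING, 'none');
EOF
sed -i "s/CURLOPT_ENCODING, 'none'/CURLOPT_ENCODING, null/" /tmp/File.php
cat /tmp/File.php
# -> curl_setopt($fp, CURLOPT_ENCODING, null);
```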
Open source is an absolutely amazing thing.
I also created a ticket on FreshRSS here, and my fix has already been merged and should find its way into the next FreshRSS release.
That was a very satisfying investigation. 🙂
Concerning the actual issue with sending the header: After some discussion with
the author of the blog, we were able to figure out that the one difference in
our setup is that I’m using s3cmd to push the files generated by Hugo to the
S3 bucket. They’re using Hugo’s deploy
feature. As best as we could figure out, the AWS SDK used by Hugo automatically
sets the header when pushing to a bucket. AWS S3 then just uses the header during
the PUT operation, but doesn’t store the fact that the header was set. So it will
not be returned as part of a response. But Ceph S3 seems to be set up differently,
and when the Content-Encoding header is set during the push, it will also
return it as part of the response to a GET request.
And that’s it for this one. I hope you all made it safely into 2026, and I wish you all a happy new year. 🙂