tl;dr: configure things with text files. It’s just the moral thing to do.

Disclaimer

I really haven’t been feeling it the past few weeks, just in a rut in terms of inspiration and documentation - partially because of this issue - so I’m writing a lot of this down after the fact. It’s going to be a rant/ramble and probably largely incoherent. I will try to document my current setup with functional certificate auto-renewal, though.

Much of this is because I’m in way over my head in terms of web server configuration, and when you add a GUI layer abstracting much of that configuration - what is it really doing when I tick this box? Then if I have to jump into the Advanced tab and paste configuration: I don’t really know what those snippets do in the first place, I don’t know if I’m applying them in the correct place, and I don’t know if NPM is even applying them at all!

I’m sure this is solvable with NPM, if you know how to configure Nginx in the first place, but the point I’m trying to make here is - fuck that.

Maybe I’ll come back at some point in the future if I ever really learn Nginx properly.

Regrets

I was singing the praises of Nginx Proxy Manager, had migrated all my configs over, and was (for a short time) very happy with the result. Maybe I was too harsh on GUIs after all.

Then I tried to set up Nextcloud. Then I tried to set up Filebrowser Quantum. I had nightmares about 502 Bad Gateway errors the whole weekend. Before I even got around to publishing that post, which I wrote over a week ago, I had already migrated all the way back to bare metal Nginx.

The benefits of using NPM that I outlined in the previous post still mostly apply. A particular complaint I have now, though, is how little direct control NPM gives over the configuration file, obfuscating most of the settings behind checkboxes and text fields. You know, like how all GUIs operate. I spent the entirety of my Friday evening trying to get Nextcloud to function behind my NPM reverse proxy.

I thought, okay, maybe it’s just Nextcloud. I fought with a million different compose configurations: Nextcloud AIO (which I wasn’t a fan of - it deployed several containers outside of the Portainer stack and didn’t play nice with the existing NPM reverse proxy setup, since it recommends its own separate reverse proxy by default), the unofficial LinuxServer.io Nextcloud image, and a completely custom compose file of my own. I fiddled with every imaginable setting, custom Nginx snippets, and location settings in NPM. All to no avail.

I had to know. I shut down NPM, started up my trusty Nginx systemd service, pasted the Nginx config I had been working on for the Nextcloud subdomain. And it just worked.
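
For context, the config in question was nothing exotic - roughly the shape below. This is a minimal sketch, not my exact file: the domain, upstream port, and certificate paths are all placeholders.

server {
    listen 443 ssl;
    server_name nextcloud.domain.com;

    # placeholder paths - certbot names the directory after the cert's first domain
    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    # uploads will hit Nginx's default 1m body size limit otherwise
    client_max_body_size 512M;

    location / {
        # assuming the Nextcloud container publishes a port on localhost
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}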

I couldn’t just give up then. I knew it should work. I got to the point of going into the NPM Docker container shell and editing the configuration file directly. Nothing worked.

Okay, I thought, surely it’s just Nextcloud. I tidied everything up and went to bed. The next day I came across Filebrowser Quantum. I already had a working Filebrowser subdomain in NPM, so this would be cool to try out.

Same thing. 502s all across the board. Nothing worked.

Trying Nginx in Docker

Ultimately, I didn’t like the idea of playing Russian roulette with any given Docker container. Before NPM, I’d had no issue that I wasn’t able to solve, so the NPM approach didn’t really fix anything. The only thing it did better was auto-renew my DNS challenge wildcard certificate, and I also liked that it was isolated in a Docker container. Those were really the only two things it had going for it, and at this point it had a lot going against it.

The next thing I tried was spinning up an Nginx container, to maintain the Docker isolation at least. I would still have to figure out the certificate auto-renewal, but 1) that was what I originally planned to do before having this brilliant NPM idea and 2) I was managing just fine without it anyway.

Admittedly I didn’t really give this my best shot - middle of the work week, and as I said I wasn’t really feeling it in the first place - but the hierarchy of configuration directories and mounting them into the container was just not straightforward at all, especially when getting Letsencrypt involved. It completely differed from what I was used to with the previous configuration, and it seemed like I was always missing something or pointing to the wrong place. It didn’t feel worth the effort when I had a fully operational Nginx configuration on the server already.
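
For the record, what I was attempting looked something along these lines - a sketch of the mounting scheme with the stock nginx image and my existing host paths, not a setup I ever got working:

# sketch only - reuse the host's existing config and certs inside the container
docker run -d --name nginx \
  -p 80:80 -p 443:443 \
  -v /etc/nginx/conf.d:/etc/nginx/conf.d:ro \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  --restart unless-stopped \
  nginx:latest

(Worth noting: 127.0.0.1 inside a container is not the host, so any proxy_pass pointed at localhost would need rethinking too - likely part of why everything felt like it was pointing to the wrong place.)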

So I spun that one back up.

Implementing Letsencrypt auto-renewal with DNS-01 challenge

I tried this with --manual-auth-hook scripts using a Cloudflare API token (here), which I think I was misunderstanding. I then tried with python3-certbot and python3-cloudflare. That ended with an error something like “to use cloudflare tokens, you’ll need the python package cloudflare>=2.3.1. This certbot is running cloudflare 2.11.1” - but Debian is not rolling release, and the cloudflare package I had was the newest available, so I was SOL there.

This post saved my ass. I will mostly let the post explain, but here are the abridged steps:

  1. Create a Cloudflare API token for the domain, in the Cloudflare web admin panel.
  2. Place that token in a read-only-by-root file (/root/.secrets/cloudflare.ini) - the format is shown after the command below.
  3. Install python3-certbot-dns-cloudflare and python3-pip.
  4. Run the following command:
certbot certonly \
--dns-cloudflare \
--dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
-d "*.domain.com"

And you have yourself one wildcard certificate.
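
For reference, the credentials file from step 2 is a one-line INI file - this is the documented format for the certbot-dns-cloudflare plugin, with a placeholder token:

# /root/.secrets/cloudflare.ini
dns_cloudflare_api_token = your-api-token-here

Certbot will warn you if that file is readable by anyone other than root, so a chmod 0600 on it is worth doing.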

I was under the impression I would need to add this to my crontab, but nope. Certbot sets up scheduled renewal automatically (on Debian, via a cron entry and a systemd timer that ship with the package), or should - it’ll tell you in the output of the above command.
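
If you want to verify that for yourself on Debian, both of these should turn something up (the cron entry no-ops when systemd is running, so the timer is what actually fires):

systemctl list-timers certbot.timer
cat /etc/cron.d/certbot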

To renew manually, simply:

certbot renew

I was used to doing this by hand; now it works just like a publicly accessible domain. Running certbot renew --dry-run will verify that everything is configured correctly for renewal.
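
One caveat, and this is standard certbot behaviour rather than anything from the post: Nginx keeps serving the old certificate until it reloads, so renewal alone isn’t the whole story. A deploy hook takes care of it:

certbot renew --deploy-hook "systemctl reload nginx"

Certbot saves the hook into the cert’s renewal config for future runs; alternatively, an executable script dropped into /etc/letsencrypt/renewal-hooks/deploy/ runs after every successful renewal.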

EOF