Much was involved in this process, and I’m documenting it after the fact, since I did it on a whim over a couple of weeknights. I ended up doing this in two pieces: first migrating to Portainer for Docker management, then moving my nginx configuration to Nginx Proxy Manager (henceforth npm). It wasn’t very linear, but in retrospect there’s a clear order to document.
Migrating to Portainer
I stumbled across Portainer recently and was very impressed, 1) with the general interface and the visibility into Docker that it provides, and 2) with the fact that it connects to an existing Docker install - I really didn’t have to do any configuration to immediately start reaping the benefits of using it. Now that I’ve learned to manage Docker “the hard way” and picked up a general understanding of how things operate on the CLI, I figured it was time to move over entirely to a GUI. The CLI is great, and it’s generally my preference to learn it first, but recently I’ve come to appreciate how a well-designed GUI can provide a much more intuitive, visual understanding of certain configurations. I keep adding more and more Docker services, retiring old ones, and futzing with configurations, and it had generally become a hot mess. I got ample use out of the `docker system prune` command, but I didn’t realize how many things that left hanging around.
Maybe I should have done a “Docker for dummies” course before really diving in, and I would know all this, but I guess I just like pain. Portainer gave me the visibility into all my Docker services that I didn’t realize I was missing. Yes, I could have done this on the CLI, and I still can, but another thing I’m coming to appreciate about GUIs (I’m just now having this thought) is that clicking is universal. If there’s a settings menu, I’m going to click on it. It lends itself to a natural curiosity. If I don’t know a CLI command, and I don’t know that I need it, I’m not going to go digging for it.
So, Portainer has been a good learning experience - the next step into what Docker really offers.
The first thing I did was stop and remove all the Docker services I managed from the CLI, and redeploy them as stacks in Portainer. This was fairly easy - just copy-pasting a `docker-compose.yml` file per service, generally. But I did run into a couple of issues.
The biggest thing was referencing environment variables in the Docker config when they aren’t located in an `.env` file and are instead pasted into the Portainer UI. I ran into this while redeploying Immich - I kept getting a postgres incorrect username/password error. The gist of it is that you need to reference `stack.env` instead of just `.env` if the compose file references an environment file. I feel like there’s more to learn here, and more to learn in general about best practices for managing environment variables, but I don’t feel like another deep dive at this point. I’ll get there.
The other issue was more a ramification of moving to Portainer instead of managing compose files from my home directory. Previously, I had a script called `dockerpull`, set as a root cronjob, that would loop through my Docker directories and run `docker compose pull && docker compose up -d`. That wasn’t going to work the same way with Portainer. Luckily for me, Watchtower exists, and it was dead simple to set up.
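For reference, a minimal Watchtower deployment looks something like this (the image name `containrrr/watchtower` is the real one; the schedule is just an example, not necessarily what I run):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Watchtower inspect and update other containers
    environment:
      - WATCHTOWER_CLEANUP=true          # remove old images after updating
      - WATCHTOWER_SCHEDULE=0 0 4 * * *  # example: check for updates daily at 4am
```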
The remainder of this configuration was just cleanup. It honestly wasn’t too difficult. And now I don’t have to SSH into my machine to mess with Docker services :D
Migrating from nginx systemd service to npm
Mine eyes
First: dark theme or we riot. I’m not migrating without a dark theme. It’s against my morals.
Happily, this ended up being very simple to accomplish.
- Download this file and save it to a directory (I have a standard place I put all docker volumes)
- `chmod +x 98-themepark` to make it executable
- Add this as a volume to your docker-compose file:

```yaml
volumes:
  - /your/save/path/98-themepark-npm:/etc/cont-init.d/99-themepark
```
Dark theme acquired.
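Putting the pieces together, the npm compose file ends up along these lines (the image is the standard `jc21/nginx-proxy-manager`; the paths and extra volumes are placeholders for your own layout):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    container_name: npm
    restart: unless-stopped
    ports:
      - "80:80"
      - "81:81"    # admin UI - removed later once it's behind the proxy
      - "443:443"
    volumes:
      - /your/save/path/data:/data
      - /your/save/path/letsencrypt:/etc/letsencrypt
      - /your/save/path/98-themepark-npm:/etc/cont-init.d/99-themepark
```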
Generating SSL certificate and locking down the admin panel
npm comes with ports 80, 81, and 443 exposed by default. Obviously I would like to lock down the management interface (port 81) behind a valid SSL certificate and subdomain like I’ve done with my other services.
Also, at this point, I had to nuke the nginx running via systemd due to port conflicts, and re-expose Portainer on `0.0.0.0` over plain HTTP so I could keep managing services until I had npm up and running.
Generating an SSL certificate is easy - Let’s Encrypt is built into the GUI. Even better, you can use a DNS challenge if your domain is not publicly routed. And best of all, you can automate the certificate renewal process through your DNS provider’s API. I was already planning to do this within the next month or so. Bonus!
- Domain Names: `*.domain.com`
- Select “Use a DNS challenge”
- Select your DNS provider (mine is Cloudflare)
- If using Cloudflare, create an API key in their backend with edit access to your domain’s DNS records. Paste that API key where specified here.
And your certificate has been generated! I will have to check in on the renewal process, but it should be happening automatically.
Next we proceed to use that certificate to create a valid subdomain, so that we can access the npm web interface behind the new reverse proxy. Go to Hosts → Proxy Hosts, add a new one:
- Details tab
  - Domain names: `npm.domain.com`
  - Scheme: `http`
  - Forward Hostname / IP: `npm` (this uses the name of your Docker container)
  - Forward Port: `81`
  - Cache assets: checked
  - Block common exploits: checked
- SSL tab
  - Select your newly created wildcard certificate
  - Force SSL: checked
  - HTTP/2 support: checked
  - HSTS enabled: checked

Your browser should auto-redirect to https when you go to `npm.domain.com`.
This does leave one issue though: the service is still accessible via `http://[ip-address]:81`. To fix this, all you need to do is remove port 81 from the docker compose file, so it’s not exposed outside of the Docker network.
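In compose terms, that just means dropping the admin port from the mapping while leaving 80 and 443 exposed:

```yaml
services:
  npm:
    ports:
      - "80:80"
      - "443:443"
      # - "81:81"  # removed: the admin UI is now only reachable through the proxy itself
```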
Creating Docker network for proxied services
Now this is where I started running into some Docker - maybe it’s more fair to say container/isolation - quirks.
Up to this point, I’ve used the scheme `127.0.0.1:[port-on-host]:[port-in-docker]` to expose a Docker port to localhost so that I could then have nginx redirect to that port via subdomain. Turns out, when your nginx is running as a Docker container with its own network, that doesn’t exactly work.
Since each service has its own network, you end up with a bunch of new network interfaces, and containers - by design - can’t really talk to each other. I kinda need npm to be able to see the other Docker services.
Once more - as seems to be a theme for this project - the fix was fairly simple. Create a network, and add all the Docker services that you want to proxy to that network.
Create an external Docker network:

```shell
docker network create npm_network
```
Add all services to that network, including npm, by editing their compose files (adding two sections - one per service, and one global):
```yaml
# docker compose file
services:
  your-service:  # repeat for each service you want proxied, including npm
    # ...
    networks:
      - npm_network

networks:
  npm_network:
    name: npm_network
    external: true
```
That’s pretty much it. Now, you do need to keep in mind that these containers can all communicate with each other.
Creating npm proxies
I ran into some odd issues - probably because I was configuring something incorrectly at some point in the process - but the following has worked for every single service I’ve tried so far. It’s basically the same as locking down the npm admin panel, but expanded a little.
In each `docker-compose.yml`, edit each port to be `127.0.0.1:[port-on-host]:[port-in-docker]` as previously specified. This means the ports are only exposed to localhost, and not accessible via `http://[ip-address]:[port]` from a web browser on the LAN. Also make sure there are no conflicting ports between running services.
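For example, a hypothetical service whose container listens on 80 and was previously published on host port 8080 would end up like this:

```yaml
services:
  someservice:        # placeholder name
    ports:
      - "127.0.0.1:8080:80"  # reachable from the host only, not from the LAN
    networks:
      - npm_network          # so npm can reach it by container name
```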
Then for each service, go to Hosts → Proxy Hosts, and add a proxy:
- Details tab
  - Domain names: `[subdomain].domain.com`
  - Scheme: `http`
  - Forward Hostname / IP: `[name-of-docker-container]`
  - Forward Port: `[port-in-docker]` (npm reaches the container directly over the Docker network, so use the container’s internal port, not the host port)
  - Cache assets: checked
  - Block common exploits: checked
- SSL tab
  - Select your newly created wildcard certificate
  - Force SSL: checked
  - HTTP/2 support: checked
  - HSTS enabled: checked
That’s it!
I did play around with trying to pass custom headers via the Locations tab, but npm’s defaults seem to cover them. The services that needed specific headers in my manual config file seem to be working just fine with npm.
The only container that I didn’t manage to get up and running - granted, I didn’t really try - was the qBittorrent-with-Gluetun container. I wasn’t sure how `network_mode` and `networks` would interact, and I knew that would send me down another rabbit hole.

EDIT: I just set that one up. Turns out, all the port exposure is done through Gluetun, so I just needed to add the Gluetun container to the network and leave the qBittorrent container as `network_mode: "container:gluetun_qbit"`. Simple!
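A sketch of that arrangement (the container name `gluetun_qbit` is from my setup; the image names are the common Gluetun and linuxserver.io qBittorrent images, and the VPN provider settings are omitted):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun_qbit
    cap_add:
      - NET_ADMIN          # required by Gluetun to manage the VPN tunnel
    networks:
      - npm_network        # npm proxies to this container
    # VPN provider credentials and port settings omitted

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "container:gluetun_qbit"
    # no ports or networks here - all traffic goes through Gluetun

networks:
  npm_network:
    name: npm_network
    external: true
```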