Setting the stage

At this point in my self-hosting career, I’ve made some significant upgrades to my home LAN. No longer is it just a couple of machines I access every now and then - it now runs several services that I’m planning to use as my daily drivers, plus more storage space than I’ve ever had to work with before. Jellyfin for my media server. Pi-hole for my local DNS. And the thing that really spurred this on: a working WireGuard bounce server to access everything remotely, consistently.

I’m really getting tired of typing in the IP addresses and ports of everything on my LAN. Sure, Firefox can bookmark a keyword (so that I can type “jelly” into my browser and it automatically redirects me to http://10.0.99.40:8096), and everything between the WAN and my LAN is already encrypted over the WireGuard VPN. So the current setup could keep working as-is - I don’t strictly have to get everything running over https with domain names.

But I want to.

So to outline the goals of this project:

  1. Local DNS resolution. I want to type the name of a service, in the form of a subdomain, into the address bar of my browser and have it take me there. I don’t want to have to type https:// or use custom bookmarks.
  2. All services running over https, with http automatically redirected to https.
  3. No ports exposed to the WAN except for my WireGuard port (and a couple other port forwards for specific services).

Eventually, I’d like to use certificates that aren’t self-signed - it’s a little beyond me right now. I looked into Traefik, Caddy, and Tailscale. Traefik was intimidating - I’m hesitant to implement it until I can get a better idea of how it actually works. Caddy seemed like it would work, but it’s yet another Docker service. Tailscale is just souped-up WireGuard and you have to sign up for an account (I’m being a little facetious here; I’m a little conflicted on Tailscale. There is a self-hosted version, headscale, that I am considering. I’m also considering biting the bullet and signing up. I have done a lot of what I came here to do in terms of learning, and part of me feels like it would be really nice to have something that Just Works for once…)

Anyway, for now, I can deal with ignoring the browser warnings.

Initially, I tried the Docker service nginx-proxy-manager. By all accounts it should have worked, but for whatever reason, it didn’t - my best guess is the virtual interfaces that Docker creates were somehow borked. At this point in time, I have NO idea how to begin troubleshooting that, and I didn’t want to try to figure it out.

I had previously set up a successful Docker subdomain configuration on my webserver, with valid https certificates via certbot and Letsencrypt, and I figured I would take a shot at running one on my LAN.

So if you’ll join me for the config, here’s what we’ll need:

  • Local A/AAAA/CNAME records - I used my Pi-hole DNS
  • A server with:
    • at least one Docker service, exposed on an internal port
    • nginx installed
  • For this to work remotely: WireGuard clients need to be configured to use your LAN DNS (see the sketch below). I already had this set up.
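For reference, the only WireGuard-side piece that matters here is the DNS line in each client’s config, pointing at the Pi-hole so that names like file.nas.local resolve over the tunnel. A minimal sketch - the addresses, endpoint, and key placeholders below are illustrative, not my actual values:

# /etc/wireguard/wg0.conf (client side) - illustrative values only
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.100.2/32
# point DNS at the Pi-hole so LAN names resolve through the tunnel
DNS = 10.0.99.2

[Peer]
PublicKey = <bounce-server-public-key>
Endpoint = vpn.example.com:51820
# route the LAN and VPN subnets through the tunnel
AllowedIPs = 10.0.99.0/24, 10.0.100.0/24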

Local DNS on pi-hole

  • From the Pi-hole web interface, go to Local DNS > DNS Records
    • Add a new domain (I used nas.local) and an associated IP (in my case, 10.0.99.40)
  • Then go to Local DNS > CNAME Records
    • Add a new subdomain (I used file.nas.local) that points to the A record (nas.local)

And that’s it! Add as many subdomains as you’d like - I currently have five working ones. They’ll all point back to the same A record. To make each one accessible via its subdomain over https, you just add a separate certificate and a separate server block to your nginx config.
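Before touching nginx, it’s worth sanity-checking that the records resolve by querying the Pi-hole directly from another machine on the LAN (or over the VPN). The Pi-hole address below is a placeholder - substitute your own:

# ask the Pi-hole for the A record (10.0.99.2 is a placeholder for your Pi-hole)
dig @10.0.99.2 nas.local +short
# expected output: 10.0.99.40

# the CNAME should chase through to the same address
dig @10.0.99.2 file.nas.local +short
# expected output: nas.local. followed by 10.0.99.40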

Generate certificate(s)

It’s on my todo list to get away from openssl in favor of libressl - but that’s not really a priority for me right now. First it would help to understand more of the basic options here; I copy-pasted this command from somewhere and didn’t really dig into the specifics.

# full command
#   -x509             output a self-signed certificate rather than a certificate signing request
#   -nodes            do not passphrase-protect the private key
#   -days 365         valid for one year
#   -newkey rsa:2048  generate a new 2048-bit RSA key along with the certificate
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/selfsigned.key -out /etc/ssl/certs/selfsigned.crt
 
# a helpful alias if generating many certificates
alias selfsign="openssl req -x509 -nodes -days 365 -newkey rsa:2048"
 
selfsign -keyout /etc/ssl/private/selfsigned.key -out /etc/ssl/certs/selfsigned.crt

You’ll want to generate one for the root domain and one for each subdomain - a small loop (sketched below) saves some typing. Once the certificates exist, configure nginx as in the next section, redirecting everything to https.
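Here’s the loop I mean - just a sketch, and the subdomain list is made up, so swap in your own. The -subj and -addext flags (the latter needs OpenSSL 1.1.1 or newer) aren’t in the command above; they fill in the common name and a subjectAltName non-interactively, since some clients want a SAN even on self-signed certificates. The filenames follow the naming used in the nginx config below.

# sketch: one self-signed certificate per name; adjust the list to your own subdomains
for domain in nas.local file.nas.local jellyfin.nas.local; do
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -subj "/CN=${domain}" \
        -addext "subjectAltName=DNS:${domain}" \
        -keyout "/etc/ssl/private/${domain}_selfsigned.key" \
        -out "/etc/ssl/certs/${domain}_selfsigned.crt"
done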

Edit nginx sites-available file

# /etc/nginx/sites-available/nas.local

# default server, uses its own certificate
server {

	listen 443 ssl default_server;
	listen [::]:443 ssl default_server;
	include snippets/self-signed.conf;
	include snippets/ssl-params.conf;

	root /var/www/nas.local;
	index index.html;
	server_name nas.local;

	location / {
		try_files $uri $uri/ =404;
	}
}

# 301 redirect for primary domain AND subdomains
server {
  listen 80;
  listen [::]:80;

  server_name nas.local *.nas.local;

  return 301 https://$host$request_uri;
}

# reverse proxy for filebrowser docker service running on 127.0.0.1:8080
server {
  server_name file.nas.local;
  location / {
    proxy_pass http://127.0.0.1:8080;
  }

  listen 443 ssl;
  # subdomain-specific certificate (instead of the root cert from snippets/self-signed.conf)
  ssl_certificate /etc/ssl/certs/file.nas.local_selfsigned.crt;
  ssl_certificate_key /etc/ssl/private/file.nas.local_selfsigned.key;
  include snippets/ssl-params.conf;

}

I can’t claim that I understand a ton here - I’m not the most well-versed in nginx. I copied some of it from my then-functional config using Letsencrypt + certbot on my VPS. Also, for certain services I needed some additional configuration to get them working - such as passing headers to the proxied app, which I understand even less at this point in time.
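For what it’s worth, the extra configuration I’m alluding to is usually a handful of proxy_set_header lines inside the location block, so the proxied app sees the original hostname, client IP, and scheme (plus the upgrade headers for anything that uses websockets). A hedged example of the common ones - which of these a given service actually needs varies:

  location / {
    proxy_pass http://127.0.0.1:8080;

    # pass the original request details through to the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # needed by services that use websockets
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }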

Here’s what the snippets/self-signed.conf and snippets/ssl-params.conf look like.

# /etc/nginx/snippets/self-signed.conf
# this points to the root certificate and key

ssl_certificate /etc/ssl/certs/nas.local_selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nas.local_selfsigned.key;

# /etc/nginx/snippets/ssl-params.conf

ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_ciphers EECDH+AESGCM:EDH+AESGCM;
ssl_ecdh_curve secp384r1;
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

# Disable strict transport security for now. You can uncomment the following
# line if you understand the implications.
#add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
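
Two notes on this snippet. First, ssl_dhparam points at a file that doesn’t exist by default, so generate it once (it can take a little while):

# one-time generation of the Diffie-Hellman parameters referenced by ssl_dhparam
openssl dhparam -out /etc/nginx/dhparam.pem 2048

Second, as far as I can tell, ssl_stapling doesn’t do anything useful with self-signed certificates - there’s no OCSP responder to query - so nginx just logs a warning and moves on. It’s harmless to leave in.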

Editing docker-compose.yml

Make sure to bind any services to 127.0.0.1:[port]:[port] instead of the default [port]:[port], if you don’t want the service to also be reachable over plain http at [server-ip]:[port].
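As a concrete sketch (the service name, image, and ports are illustrative - adjust to whatever you actually run), the difference is just the host side of the port mapping:

# docker-compose.yml (excerpt) - service, image, and ports are illustrative
services:
  filebrowser:
    image: filebrowser/filebrowser
    ports:
      # bound to loopback: only nginx on this host can reach it
      - "127.0.0.1:8080:8080"
      # the default form would also expose it on the LAN over plain http:
      # - "8080:8080"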

This can depend on your needs. For example, I had to leave my FreshRSS instance directly accessible via its IP for now, as the mobile app I use, FeedMe, throws an error when the certificate is self-signed.

I’m also troubleshooting my Jellyfin configuration right now - it doesn’t seem like the mobile app is able to find the server. I’m thinking it’s a similar issue.

Wrap-up

So, this has been a good learning experience, but I am discovering that self-signed certificates are just not the way to go. Looking into the Jellyfin issue further, it seems the mobile apps simply won’t support them, based on how the underlying OS (Android) handles certificates.

Ultimately, I foresee this turning out to be endless workarounds and tweaks and I’d rather not deal with all that. If I can find a way to get away from using self-signed certificates in the first place, that would be massively preferable.

This means I will be looking further into one of the options I mentioned earlier:

  • Traefik
  • Caddy
  • Tailscale (or its self-hosted counterpart, headscale)

I already did some preliminary digging, and I’ve got some good places to start.

EOF