It’s midnight on a Sunday and I shouldn’t be writing this - I should be going to bed… but it was one of those projects I get every once in a while that was completely stream of consciousness, adjusting as I went, not linear in the slightest. So I want to get it down while it’s still fresh in my mind.
The lay of the land
I’ve had my NAS chugging along as a mostly dumb storage device, running TrueNAS with an iSCSI share that I mount on a Debian virtual machine running under Proxmox on a happy little mini PC. This has been working well enough for almost 10 months at this point. In that previous post, I went over some pros and cons of SMB vs NFS, why I wanted the storage on a secondary PC, etc.
I was not fully happy with this decision at the time, and I knew I would likely end up changing it in the future. But I wanted a storage device that worked for the time being, and so I made the jump. Something came over me this weekend and I decided to revisit it. Why I didn’t choose a holiday weekend is beyond me, but inspiration comes when it feels like it, I suppose.
Like I said earlier, this process was chaotic. It started as mostly a brainstorming session, then I hopped straight into testing theories - and honestly a bit too hastily into the restructure itself.
I think it’d be best to write out what my structure looks like now that the dust has mostly settled, and proceed to explain my reasoning, some brainstorming, and issues I ran into.
Current (new) structure
| Device | OS | Purpose |
|---|---|---|
| Mini PC | Proxmox (Debian 12 VM) | Docker server, reverse proxy |
| NAS | TrueNAS | Storage, shares, backups |
Storage structure:
- `software` - dataset - single SSD
  - `apps` - dataset - app installations/configs (currently none in here)
  - `home` - dataset - TrueNAS user home directories
- `tank` - dataset - RAIDz1, 4x 14TB HDDs
  - `bups` - dataset, SMB share - all backups, mounted at `/smb/bups`
  - `media` - dataset, SMB share - all media, mounted at `/smb/media`
  - `docker` - zvol, iSCSI share, sparse provisioned - VM disk for the mini PC, mounted at `/home/user/docker`
  - `shares` - dataset - for future potential shares
    - `webdav` - dataset - testing out WebDAV with a TrueNAS app
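If you prefer commands to bullet points, recreating that layout from a shell would look roughly like this - just a sketch assuming the pool names above and default options, plus the sparse 5TB zvol I mention later:

```sh
# SSD pool ("software")
zfs create software/apps        # app installations/configs
zfs create software/home        # TrueNAS user home directories

# RAIDz1 pool ("tank")
zfs create tank/bups            # backups, shared over SMB
zfs create tank/media           # media, shared over SMB
zfs create tank/shares          # parent for future shares
zfs create tank/shares/webdav   # WebDAV testing

# Sparse (-s) 5TB zvol backing the iSCSI extent
zfs create -s -V 5T tank/docker
```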
Reasoning
Trying to boil this down into sensible paragraphs is difficult.
I took my first notes on Tuesday. I was originally thinking about restructuring to separate my Docker configs from my Docker volumes to make the backup part easier - not trying to `tar.gz` terabytes of data. (I had made that mistake a few times when moving my files around and not updating my backup script accordingly.) Then I got to thinking about ZFS and the fact that my NAS doesn’t get to take advantage of many of the file-level features because of how I have it structured - a monolithic iSCSI share. I think my mind was made up when I started looking into Hetzner for cloud backups and knew I wanted some level of separation on the NAS itself - along with the realization that if I didn’t do something now, while my pool usage was under 15%, I’d have to live with it until I got a secondary NAS or took an entire week to transfer data.
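To illustrate the fix for that mistake - hypothetical script, though the mount points are the real ones from my setup - the trick is just scoping the archive to the config directory and nothing else:

```sh
#!/bin/sh
# Nightly Docker config backup - grabs only the configs,
# never the terabytes of media. Filename scheme is illustrative.
tar -czf "/smb/bups/docker-configs-$(date +%F).tar.gz" \
    -C /home/user docker
```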
After I made the decision to change something, I waffled between options for a while. I’ve tried to translate some of my notes below.
- Option: run Docker containers on the mini PC or on the NAS itself
- Pros of mini PC: more compute, I’m used to it, I’ve already separated the storage network from the rest of the network, CLI level control, Debian compatibility, it’s already worked for 10 months
- Cons of mini PC: either iSCSI as monolithic share, a billion iSCSI mounts to separate storage more logically on the NAS side, or SMB/NFS and an ongoing permissions battle until I grow old and die
- Pros of NAS: same physical system, presumably could use isolated datasets as the equivalent of folders
- Cons of NAS: would need to upgrade or risk impacting storage performance, not sure what to use in terms of containers
- Option: iSCSI shares vs SMB/NFS
- Pros of iSCSI: faster/block level on the mini PC side, already isolated, I’m used to it
- Cons of iSCSI: not getting file-level ZFS benefits, monolithic (with my current structure), old protocol with laughable security
- Pros of NFS/SMB: protocols designed for file-level sharing, potentially accessible to other devices, full benefits of ZFS, presumably better security
- Cons of NFS/SMB: slower (though maybe not significantly - see the quick check sketched after this list), learning curve, permissions oh god permissions
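About that “slower”: a crude way to put a number on it is to run the same sequential-write job against both mounts and compare the reported bandwidth. This is only a sketch - it assumes `fio` is installed and uses the mount points from my final setup:

```sh
# Same 2G sequential-write job against the SMB mount and the
# iSCSI-backed disk; compare the bandwidth numbers fio reports.
fio --name=smb-write   --rw=write --bs=1M --size=2G --directory=/smb/media
fio --name=iscsi-write --rw=write --bs=1M --size=2G --directory=/home/user/docker
```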
I went through so many iterations of options for this stack. Let me write some of them out, and what my thoughts ended up being.
- Leave everything alone, just separate Docker volumes from Docker configs in the folder structure
- I was already past this point.
- Use a separate iSCSI share for Docker volumes vs Docker configs, leave the rest alone
- I ended up deciding that I wanted to leverage NFS/SMB + ZFS.
- Run Portainer and Nginx Proxy Manager as apps on the NAS, run stacks from Portainer with datasets as volumes.
- I didn’t want to resume my fight with NPM, and I didn’t know how dataset access would work if I ran Portainer on the NAS. This did seem like one of the most viable alternatives, though.
- Run custom apps on the NAS - presumably like Portainer, it looked like you could run custom Docker compose (its YAML configuration).
- I didn’t want to fight the GUI; it literally only had a spot for an app name and an awful-looking 3-line text field to paste YAML configuration into. I didn’t trust it one bit.
- Run containers on the NAS
- It’s an experimental feature and I trusted this even less than the previous option.
- Run a VM on the NAS and share relevant directories via NFS/SMB (inspired by this) - keep it all on the same machine, but still have the control that Debian gives me.
- This would logically end up being the same setup that I have except it adds the lovely TrueNAS webGUI as another pain point. No thanks. I did pursue this idea for a while, though, before realizing the NFS/SMB shares would work the same logically.
- Keep the VM I currently have, but see above - share directories via NFS/SMB instead of the monolithic iSCSI disk.
- 1) I don’t have to screw with trying to figure out VMs on the NAS, and 2) again, it basically ends up being the same setup given that I’ve already separated the storage network. This ended up being the option I dug into the most before figuring out…
Then it hit me: why not BOTH? Keep an iSCSI zvol, still reasonably large (I set it to 5TB but sparse provisioned), for all Docker configs. Mount this on my SSH user’s home directory. Then share other datasets - for backups and media - via SMB/NFS.
The reason this occurred to me is I was fighting (and losing to) NFS and permissions when it came to Docker. I had this thought at about 4:30am and immediately proceeded to implement it. Why I didn’t sleep on it is beyond me.
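For context, the client side of that zvol mount is a couple of `open-iscsi` commands - a sketch where the portal IP matches my setup, but the target IQN and device name are illustrative:

```sh
# Discover targets on the NAS, then log in to the Docker zvol's target
# (iqn.2005-10.org.freenas.ctl is the TrueNAS default base name)
iscsiadm -m discovery -t sendtargets -p 10.20.99.37
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:docker -p 10.20.99.37 --login

# The zvol appears as a new block device (check lsblk) - partition and
# format it once, then mount it over the SSH user's docker directory
mount /dev/sdb1 /home/user/docker
```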
Regarding NFS: every time I’ve tried to use it I can’t stand it. Typically I’m the guy who will use the protocol built for the Linux stack, but I don’t want to touch NFS with a ten-foot pole. Maybe I just don’t understand something foundational about the protocol and how it handles permissions, but Docker just can’t jive with it in my testing. I settled on SMB.
Permissions oh god permissions
(Honestly, I should probably blame this on Docker and the whole UNIX permissions stack as opposed to NFS or SMB, but I digress.)
The biggest thing was - again - permissions. SMB is better in some senses - when I create a file, it’s always owned by the user the directory is mounted as. But I was still running into issues with the Docker containers that mounted media volumes - Syncthing, qBittorrent, etc. If they ran as a privileged user, they could create their internal configs and access their files, but not access the share. If they ran as the same user as the share, they’d have the opposite problem.
Some implement permissions correctly when you run them as a specific user - typically those with options built in to change the UID/GID, like Jellyfin. Others had this conflict no matter what I changed. I’m not going to pretend to be an expert on Docker and the permissions model, but I feel like it’s not just me here. Some work just fine, others don’t.
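To make that concrete, here are the two patterns I mean, sketched as `docker run` commands - the second image name is made up, and the UID/GID match the share mount below:

```sh
# Pattern 1: images with built-in UID/GID options, like Jellyfin
# (linuxserver.io images take PUID/PGID environment variables)
docker run -d \
  -e PUID=1001 -e PGID=1002 \
  -v /smb/media:/media \
  lscr.io/linuxserver/jellyfin:latest

# Pattern 2: forcing an arbitrary user with --user - only works when
# the image doesn't need root for its own internal setup
docker run -d \
  --user 1001:1002 \
  -v /smb/media:/media \
  example/app:latest
```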
Eventually what I had to do is mount the SMB share with `777` permissions - all users can have their way with the files. Since I’m the only one using the PC, and the share is isolated to the mini PC anyway, whatever. This eliminates all the headache. Another nice thing about SMB is the fact that whoever creates a file, it’s always owned by the same user. So if my user account modifies something or creates a file/directory, or the root user does, it’s still always owned by `media`. (Not that it matters when every file is `drwxrwxrwx`, but hey, it’s the thought that counts. And it is a little annoying, especially how my terminal highlights the files based on their permissions, but it’s really just the default way that Windows drives show up when I mount them to Linux anyway…)
Here’s the entry from my `/etc/fstab` (finally something technical!), all on one line since fstab doesn’t allow line continuations:

```
//10.20.99.37/media  /smb/media  cifs  rw,uid=1001,gid=1002,credentials=/home/media/.smbcredentials,file_mode=0777,dir_mode=0777  0  0
```

Where the `media` user is UID `1001` and the `media` group is GID `1002`.
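For completeness, the credentials file it points at (`/home/media/.smbcredentials`) is just a pair of key/value lines - placeholder values here, and keep it `chmod 600` so the password isn’t world-readable:

```
username=media
password=changeme
```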
Now that I have this working this way, I don’t even know if I need an iSCSI share - I could just mount a Docker directory as an SMB share and call it a day. I’m leaving it be for now, though… I am tired of working on this.