Overview
Some helpful threads (a lot of which admittedly went way over my head):
- TrueNAS community Path to success for block storage
- TrueNAS community Using 100% of disk space with iSCSI
Thus far in my NAS journey, I am a baby. I’ve set up a TrueNAS box, which primarily involved spending a lot of money and building a slightly souped up computer, things which I have done previously and thus weren’t too difficult. When it comes to ZFS - terminology, implications, and honestly preliminary knowledge that I should have picked up before jumping in headfirst - I’m mostly lost.
I fought at length with NFS shares before finally landing on an iSCSI share (I also recently learned that SCSI is pronounced “scuzzy” - I’d heard the term before but never made the connection), outlined in this post.
What I was unhappy with at the time, and took as a matter of course, was the fact that I had to reserve space up front for the Zvol. Or so I thought. Over the past couple of days I have done a little more research into the matter and realized that there is a way to “thin” provision, which is different from the Zvol default of “thick” provisioning. Thin is basically allocation on demand; thick is allocation up front. Thin is also the default for datasets, which is one reason I fought with NFS so long - I didn’t want to provision the space up front, as I thought was necessary for a Zvol.
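As far as I can tell, the difference shows up in ZFS as the refreservation property: a thick Zvol reserves its full size from the pool at creation, while a sparse one reserves nothing. A quick sketch, with hypothetical pool/Zvol names:

# thick: the full volsize is reserved from the pool immediately
zfs get volsize,refreservation,used tank/thick-zvol
# -> refreservation is roughly volsize, even while the volume is empty
# thin (sparse): nothing is reserved; "used" only grows as data is written
zfs get volsize,refreservation,used tank/thin-zvol
# -> refreservation is "none"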
In this post I’ll document how I reprovisioned my existing thick provisioned Zvol as a thin provisioned one. I ran into a couple of things that tripped me up initially.
I am trying to get better about really thinking through the implications of my design decisions - pros and cons, long-term effects, etc. I had to make this decision and come up with a solution now, before the data I was storing on my NAS grew much larger. I have about 37 TB (TiB? T? This terminology still confuses the hell out of me - I can’t keep the base 2 vs base 10 straight, and every tool seems to display it differently - but I digress) of usable space, and I had allocated approximately 80% of it to the Zvol. I had also read that >80% usage on ZFS is no bueno and you really start to see some performance dips. Basically, I had less than 5 TB left before I could no longer back up the existing data to another dataset or Zvol - I figured it was a far better course of action to kick my data hoarding down the road a bit until I decided how to handle the dataset/Zvol situation.
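If you want to see where you stand relative to that 80% line, the pool-level numbers are easy to pull (pool name assumed to be tank here):

# CAP is the percentage the ZFS folk warn about exceeding
zpool list -o name,size,allocated,free,capacity tank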
Testing
This step in the process was massively important. I had a really important question that I needed to answer, and I couldn’t find any documentation talking about it.
So, from previous experience with iSCSI, I knew that since it was a block device shared across the network (as opposed to file-level), after connecting to the iSCSI share I would need to a) partition it and b) create a filesystem on it before mounting the disk and using it as normal. With thick provisioning, this is easy - I simply partition it, create the filesystem, mount it, and go on my merry way. If I allocate 1TB, the disk shows up as 1TB. Easy.
But what happens with on-demand allocation? TrueNAS still asks me to define a size for the Zvol even when choosing sparse (a.k.a. thin). There are really only two options: under-provision or over-provision. I had to assume that since the initiator machine (my mini PC running all the services) mostly treated the drive as a normal lettered drive, it would take whatever size was presented to it as the actual size of the disk. Thus, my guess was that in the case of under-provisioning, you would have to manually upsize the disk every time you exceeded capacity (I had to do this once already). Not something I was keen on doing. I wasn’t sure what would happen with over-provisioning just yet.
So I tried a test Zvol with a size of 1GB and sparse (thin) provisioning, connected to it, partitioned, filesystem-d, mounted, and tried to copy a directory larger than a gig. As I suspected, I simply received “No space left on device” errors.
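If you want to reproduce that quickly, a dd fill does the trick - a minimal sketch, assuming the test share is mounted at /mnt/test:

# try to write 2GB into the 1GB share; dd should fail with "No space left on device"
dd if=/dev/zero of=/mnt/test/fill.bin bs=1M count=2048
rm /mnt/test/fill.bin    # clean up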
But what about over-provisioning? I tested this - same as before, but I set the size to something stupidly large like 100TB. All the other steps worked just fine, and the mini PC simply treated the disk as if its actual size were 100TB. I know of no practical way to test what happens when I run out of space on the NAS itself while the mini PC still thinks there are 50+ gigs of space left on the device, but if that happens I think I have bigger fish to fry.
A brief note before continuing: please take this with about twice the daily recommended sodium intake - I don’t know what I am talking about here. At the moment it seems to work fine for my purposes, and solves the immediate problem of freeing up my 90% utilized capacity. But something is nagging, I feel like this is not good practice and I might run into issues down the road when I start to have >50% used capacity. I’m hoping that will take enough time for me to learn what not to do, but I guess we’ll see.
From what I’ve read, iSCSI is often used for VM disks, and I can easily see how running 10 or 100 VMs all with thin provisioning and losing track of your available storage space could turn disastrous. For my purposes, I’m using one drive, and I know that the available space displayed by the PC mounting the drive is simply false, so I don’t think I’ll run into any issues beyond the ones I was already facing with the massive capacity usage.
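The mitigation, as I understand it, is to ignore the initiator’s free-space numbers entirely and ask the NAS what is actually consumed - something like this, with the Zvol path assumed:

# actual space consumed by the thin Zvol vs. the size it advertises
zfs list -o name,volsize,used,available tank/thin-zvol
# and the pool as a whole
zpool list tank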
Disclaimer aside, I could now continue with my plan of attack.
Creating the new Zvol
TrueNAS web interface:
- Datasets → select the primary pool on RAIDZ array → Add Zvol
- Give it a name
- I had to tick the box for “Force creation” because I was still over 80% allocated capacity at this point
- Size: something larger than usable capacity but probably a good idea to keep it within reason - I used 50TB
- Check the box for “Sparse” - this is what enables thin provisioning!
- Set any encryption options before creation.
- Save.
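For reference, I believe the shell equivalent of the above is a one-liner - the -s flag is what the “Sparse” checkbox maps to (pool and Zvol names hypothetical):

# -s = sparse/thin; -V = the (virtual) volume size presented to initiators
sudo zfs create -s -V 50T tank/shared-zvol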
Creating the iSCSI share
Also in the TrueNAS web interface:
- Shares → Block (iSCSI) Shares Targets → Wizard
- Create or Choose Block Device
- Give it a name
- Extent type: Device, and select your newly created Zvol
- Sharing Platform: Modern OS
- Target: Create New
- Portal: For my case, I used a recently created dedicated storage network interface (see this post covering it), and already had an iSCSI portal created for it
- Initiators: leave blank
- Save
Note that I didn’t need to set much here, as I’d already done the base configuration of iSCSI in the posts mentioned above: CHAP authentication, restricting to specific IPs, putting the shares on a dedicated storage network (or dedicated-ish - I’ll come back to this later…).
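For completeness, the initiator-side half of that CHAP setup boils down to a few iscsiadm node settings - a sketch with placeholder target name and credentials:

# enable one-way CHAP for this target
sudo iscsiadm --mode node --targetname "iqn.2005-10.org.freenas.ctl:<iscsi-name>" --portal "10.20.99.37:3260" --op update --name node.session.auth.authmethod --value CHAP
sudo iscsiadm --mode node --targetname "iqn.2005-10.org.freenas.ctl:<iscsi-name>" --portal "10.20.99.37:3260" --op update --name node.session.auth.username --value "<chap-user>"
sudo iscsiadm --mode node --targetname "iqn.2005-10.org.freenas.ctl:<iscsi-name>" --portal "10.20.99.37:3260" --op update --name node.session.auth.password --value "<chap-secret>"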
Connecting to new iSCSI share and copying over data
Now on the “initiator” machine - my mini PC.
I made sure all my docker services were stopped, mpd was not active, and sudo lsof /media/shared gave no output. It probably would have been fine, but I didn’t want to take any risks.
Then it was a matter of reviewing the previous iSCSI blog post I made to put together the following:
# discover the new iSCSI share
sudo iscsiadm --mode discovery --type sendtargets --portal "10.20.99.37"
# CHAP configuration already done in my case.
# connect
sudo iscsiadm --mode node --targetname "iqn.2005-10.org.freenas.ctl:<iscsi-name>" --portal "10.20.99.37:3260" --login
# get drive letter
lsblk
# partition disk
sudo fdisk /dev/sdX
# in fdisk:
# in my case, I had to press "g" for GPT - this was important! Otherwise the drive only showed up as 16TB.
# (Best guess after the fact: an MBR/DOS label caps out at 2^32 sectors - 2TiB with 512-byte logical blocks, 16TiB with 4K blocks - though that doesn't explain why a 29TB disk worked without this last time. There may have been a configuration difference, like the extent's logical block size, that I did not catch.)
# other than that, simply create a single partition covering the entire disk
sudo mkfs.ext4 /dev/sdX1
# mount it somewhere other than the current share directory
sudo mount /dev/sdX1 /mnt
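Given the 16TB surprise above, it’s worth a quick sanity check that the label really is GPT and the partition spans the full advertised size:

# confirm partition table type and partition sizes (same sdX as above)
sudo parted /dev/sdX print
lsblk /dev/sdX1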
Right now this is temporary. We’ll come back later, replace all the configuration, and edit /etc/fstab to mount this new share where we currently have our old share. At this point, the old share is still mounted.
# copy the data!
# (note: the * glob skips top-level dotfiles; "rsync -ah --progress /media/shared/ /mnt/" would catch those too)
sudo rsync -ah --progress /media/shared/* /mnt
Regarding the above, I was trying to monitor progress - hence the rsync command with the --progress flag - but that didn’t really give me an idea of overall progress, just an endlessly scrolling screen of individual file progress. I ended up using my tried-and-true method of sudo watch du -skh /destination/directory (I also discovered sudo df -BG /destination/directory as a quicker method) and comparing the size manually to the size of the target directory.
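I’ve since learned that rsync 3.1+ can report overall progress itself, which would have saved me the du gymnastics - something like:

# --info=progress2 shows cumulative bytes and an overall percentage instead of per-file spam
sudo rsync -ah --info=progress2 /media/shared/ /mnt/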
Also, I had to leave midway through this process (which ended up taking several hours) and take my laptop with me. It took a few minutes to figure out how to suspend the process so I could disconnect from the SSH session without nuking all the copy progress I had thus far - I hadn’t thought ahead and started a tmux session prior to running the rsync command. But this shall serve me well in the future! It boiled down to <C-z> to suspend the process, then bg to resume it in the background, and disown to make sure it was no longer tied to that terminal instance.
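Written out as a recipe (the tmux line at the end being what I should have done in the first place):

# while rsync is running in the foreground:
#   press Ctrl-z    -> the shell suspends the job and prints something like "[1]+ Stopped"
bg        # resume the stopped job in the background
disown    # remove it from the shell's job table so logging out won't SIGHUP it
# next time: start the copy inside tmux and simply detach with Ctrl-b d
tmux new -s bigcopy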
Verify and replace
Notably, rsync maintains file ownership and permissions during the copy (courtesy of the -a archive flag, run as root), god bless. I was expecting to need a recursive chown and chmod after copying, and honestly wasn’t sure how long that would take on 1.5TB.
Now it was just a matter of cleanup. All of these commands are elsewhere in my documentation, so I’ll simply outline the steps I took below (with a rough command sketch after the list).
- Unmount old drive.
- Log out of old iSCSI target with iscsiadm.
- Unmount new iSCSI target and re-mount it to the old directory.
- Start docker services and verify that all is functional with the new share.
- Verify the new target is set to automatic login in its config file.
- Edit /etc/fstab: change the UUID in the old _netdev line to match the new disk.
- Remove the iSCSI share and any associated configurations in the TrueNAS web interface.
- Remove the thick provisioned Zvol in the TrueNAS web interface.
- Breathe a sigh of relief as your used space drops from 88.8% to 4.2%.
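Roughly, in command form (placeholders throughout; the share/Zvol removal happens in the web UI):

sudo umount /media/shared    # unmount the old drive
# log out of the old target
sudo iscsiadm --mode node --targetname "iqn.2005-10.org.freenas.ctl:<old-name>" --portal "10.20.99.37:3260" --logout
# re-mount the new disk at the old location
sudo umount /mnt
sudo mount /dev/sdX1 /media/shared
# set the new target to log in automatically at boot
sudo iscsiadm --mode node --targetname "iqn.2005-10.org.freenas.ctl:<iscsi-name>" --portal "10.20.99.37:3260" --op update --name node.startup --value automatic
# grab the new partition's UUID for /etc/fstab
sudo blkid /dev/sdX1
# the fstab line ends up looking roughly like:
# UUID=<new-uuid>  /media/shared  ext4  _netdev  0  0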
Wrap-up
Best of both worlds here. I get the continued benefit of not having to fuck with NFS and file permissions/ownership across at least two machines - the biggest reason I opted for iSCSI in the first place - and the relief of having on-demand allocation for any further datasets I wish to create on the NAS.
Next step is getting a dedicated NIC for the NAS and running a cable directly to my mini PC’s secondary NIC. We will see if that works how I think it’s supposed to work…
EOF