Expand special vdev/drive

ajp_anton

Dabbler
Joined
Mar 6, 2017
Messages
11
I'm running TrueNAS as a VM inside Proxmox, and the special vdev is running low on capacity.

Below is the zpool list -v output showing the situation. I'm not sure this is the best way to do things; I'm still trying things out, so feel free to suggest changes:
Code:
zpool list -v storage-pool
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
storage-pool                              36.4T  23.9T  12.4T        -         -     0%    65%  1.00x    ONLINE  /mnt
  raidz1-0                                36.4T  23.9T  12.4T        -         -     0%  65.8%      -    ONLINE
    d93bf161-5921-43c6-a8e6-83a86630d892      -      -      -        -         -      -      -      -    ONLINE
    d275206e-b995-4a35-a041-2020e704bc5f      -      -      -        -         -      -      -      -    ONLINE
    4836167a-d461-4f98-875c-02a5028bf544      -      -      -        -         -      -      -      -    ONLINE
    ccf1af98-be9d-4ef4-912f-d3f6406eeab9      -      -      -        -         -      -      -      -    ONLINE
special                                       -      -      -        -         -      -      -      -  -
  b58620dd-a4ed-4600-8a70-afdf6e511a8a    15.5G  13.1G  2.37G        -     15.5G    74%  84.7%      -    ONLINE
Main storage is a RAIDZ1 of large HDDs on a passed-through HBA, so TrueNAS gets direct access to them. As you can see, the special vdev is a single (non-redundant) disk. I'm not sure what's recommended here in a VM environment, but that 16GB drive is actually on a mirror inside Proxmox. I need lots of small drives like this in my VMs (and also for other pools within TrueNAS), which is easier to manage by having a pair of "large" 480GB SSDs in a mirror on the host and splitting that up into smaller virtual disks.
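For context, carving out and later growing one of these small virtual disks is done on the Proxmox side, roughly like this. The VM ID (100), the storage name (ssd-mirror) and the scsi3 slot are just placeholders, not necessarily my exact values:
Code:
# allocate a new 16GB virtual disk for the TrueNAS VM on the mirrored SSD storage
qm set 100 --scsi3 ssd-mirror:16
# grow an existing virtual disk later if it turns out to be too small
qm resize 100 scsi3 +16G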

Anyway, to the problem. As you can see, the special vdev is reaching its capacity. Obviously I sized that disk a little too small. I've already expanded it in Proxmox to 32GB, and TrueNAS knows about that extra capacity:
Code:
lsblk -l  /dev/sdd
NAME  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdd     8:48   0   32G  0 disk
sdd1    8:49   0   16G  0 part
How do I make the ZFS pool use that extra space? I tried the Expand Pool option in the GUI, a zpool export/import, and zpool online -e. Should I create a completely new 32GB drive and somehow move the contents of the old special vdev onto it?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
You are somewhat shooting yourself in the foot by using a piece of an external software RAID as a ZFS vDev member. If you have a single problem, like a checksum error, in the special vDev, it is possible to lose the entire pool. Remember, the whole point of avoiding hardware RAID (or backend software RAID) is that those RAID methods may perform actions that ZFS does not want, like out-of-order writes or not honoring write barriers.

The recommended protection level for a special vDev is the same as for the data portion of the pool. In your case you have RAID-Z1, which is 1 disk of redundancy, so the special vDev recommendation would be a Mirror.


Back to the question at hand.
The partition "sdd1" was not grown. I don't know if manually growing the partition to 32GB will solve your problem (or make things worse).

You could also supply a second 16GB slice to the VM and add it as a stripe to the existing special vDev.
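A rough sketch of both options, with the new device paths as placeholders (check zpool status for the real member names):
Code:
# turn the existing single-device special vDev into a Mirror by attaching a second device
zpool attach storage-pool b58620dd-a4ed-4600-8a70-afdf6e511a8a /dev/disk/by-partuuid/NEW-DEVICE
# or add a second single device as a stripe next to the existing special vDev
# (no redundancy; -f is likely needed because of the mismatched replication level)
zpool add -f storage-pool special /dev/disk/by-partuuid/OTHER-DEVICE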
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
Adding to @Arwen's correct assertions about data safety - if your special vdev runs out of space, ZFS will just use any other vdev in that same pool. So this is the least of your concerns.
 

ajp_anton

Dabbler
Joined
Mar 6, 2017
Messages
11
You are somewhat shooting yourself in the foot by using a piece of an external software RAID as a ZFS vDev member. If you have a single problem, like a checksum error, in the special vDev, it is possible to lose the entire pool. Remember, the whole point of avoiding hardware RAID (or backend software RAID) is that those RAID methods may perform actions that ZFS does not want, like out-of-order writes or not honoring write barriers.

The recommended protection level for a special vDev is the same as for the data portion of the pool. In your case you have RAID-Z1, which is 1 disk of redundancy, so the special vDev recommendation would be a Mirror.


Back to the question at hand.
The partition "sdd1" was not grown. I don't know if manually growing the partition to 32GB will solve your problem (or make things worse).

You could also supply a second 16GB slice to the VM and add it as a stripe to the existing special vDev.
Would it be better to keep those two SSDs separate in Proxmox, create each virtual drive I need in pairs (one on each SSD), and do the mirroring in TrueNAS instead?

My reasoning was that if there was a problem with one of the physical drives, Proxmox would tell me about it, I would replace and resilver it, and none of the VMs would be the wiser. If I instead handled the mirroring for each virtual drive separately, then when a physical drive had to be replaced I'd have to replace and resilver maybe ten virtual drives, everywhere I was using them. I thought the redundancy was the same, since it's the same ZFS mirroring, just done once in a centralized way on the host instead of multiple times in each VM.

Anyway, I got the resizing done; thanks for the tip on what to google (resize partition). Just to be safe, I exported the pool (not sure if that was necessary), used parted to expand the partition, and imported the pool again. Still no change, but then I ran zpool online -e on the drive once more, and this time it finally used the whole disk.
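For anyone finding this later, the sequence that worked was roughly this (the export/import may well be unnecessary, and the device name for zpool online is the one shown in zpool status):
Code:
zpool export storage-pool
# grow partition 1 on the virtual disk to fill the whole 32GB
parted /dev/sdd resizepart 1 100%
zpool import storage-pool
# expand the special vdev into the newly available space
zpool online -e storage-pool b58620dd-a4ed-4600-8a70-afdf6e511a8a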
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
Glad the resize of the partition and growing your special vDev went smoothly.


As for whether to supply 2 separate SSD pieces to the VM, or continue with what you have: it's a toss-up. If you are really using ZFS at the Proxmox level for mirroring the 2 SSDs, then in theory you have it covered.

Except that TrueNAS does not know that the drives are virtualized. I don't know the answer. It's possible that using a ZFS Mirror on the host and supplying the TrueNAS VM with a zVol, which TrueNAS then uses as a special vDev, could cause problems. Under normal conditions, a VM's storage that is not passed through should be "sync=always". Yet in this case it's not datasets but a piece of the pool, the special vDev.
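If you keep the zVol approach, the host-side setting would look something like this; the dataset name is just an example of what Proxmox might have created:
Code:
# on the Proxmox host: force every write from the guest to be committed synchronously
zfs set sync=always rpool/data/vm-100-disk-1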

So, I really don't know.

As some would say, here be dragons (aka use at your own risk).

Good luck.
 