Adding storage while changing vdev configuration?

the_jest

Explorer
Joined
Apr 16, 2017
Messages
71
I have a TrueNAS box with eight bays for spinning disks, which I use primarily for low-access media storage. My setup consists of one pool with two two-disk mirror vdevs, each made up of 10TB disks, thus using four bays; there are four empty bays. The entire pool is backed up to an external ext4 drive. (I do also have another pool of a mirrored vdev with two SSDs that I use for jails and VMs; these are attached to dedicated 2.5" bays, and are not the subject of this question.)

I'm approaching 70% capacity, and figured it's time to start at least thinking about expanding my storage. I don't want to simply add a vdev, which would lock me into a setup where I always need six disks; I also don't want to replace the 10TB disks with something larger, which would cost a lot for little gain and leave me with 10TB disks I'd have no use for. (That is, buying two 12TB disks would give me only 2TB of additional space and leave me with two unused 10TB disks.) I've also been thinking that I no longer want to give up the capacity that mirrored vdevs cost, and that I should move to some RAIDZ layout; this pool isn't used for heavy tasks and I don't have big I/O needs, so giving away 50% of raw space to mirrors seems less than ideal.

Is there any practical way to change the topology of my existing setup without building a new box or buying a bunch of disks that I'd only use for this process, preferably while re-using the existing disks? I know I could remove one disk from each vdev temporarily, and I do have the ext4 backup in case of an emergency during this stage, but I don't want to do anything _too_ crazy. What's my best path forward here?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Unfortunately, no. Changing pool topology from mirrors to RAIDZ requires destroying the pool and recreating it in the new topology.
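Roughly, and with made-up names (assuming the pool is called tank and the ext4 backup is a plain file copy mounted at /mnt/ext4_backup), that path looks something like this:

  # 1. Refresh the backup before touching anything
  rsync -avh --delete /mnt/tank/ /mnt/ext4_backup/tank/

  # 2. Destroy the old pool, then build the new RAIDZ pool from the same
  #    disks plus the new ones. The GUI (Storage -> Pools) is the safe way
  #    to do both of these steps, and you'll re-create datasets and shares
  #    on the new pool by hand.

  # 3. Copy everything back onto the new pool
  rsync -avh /mnt/ext4_backup/tank/ /mnt/tank/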
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Your best path may well be what Samuel suggests, but if you wanted to move to RAIDZ2 rather than RAIDZ1, it can be done. In brief, here's how it would work:
  • Detach one disk from each mirrored vdev
  • With the two detached disks, two new disks, and two sparse files, create a new RAIDZ2 pool. Immediately offline the sparse files.
  • Using whatever method you like (I think I'd favor snapshot/replication), copy the data from the old pool to the new pool
  • Replace the sparse files with the two remaining disks from the original pool
  • Optionally, rename the new pool to match the old pool's name
Kind of tricky, involves some CLI-fu, and from step 1 through step 4, you really don't have any redundancy. But here's how to create the degraded pool:
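Very roughly, with hypothetical device names (these won't match yours; check zpool status and use your own gptids/labels) and assuming the old pool is called tank and the new one tank2:

  # Old pool "tank": mirror-0 = ada2+ada3, mirror-1 = ada4+ada5;
  # ada6 and ada7 are the two new disks. All names here are made up.

  # 1. Detach one disk from each mirror (tank keeps running, with no redundancy)
  zpool detach tank ada3
  zpool detach tank ada5

  # 2. Create two sparse files roughly the size of the real disks (a bit
  #    oversized is fine; the RAIDZ sizes itself to its smallest member,
  #    which will be the real disks, and sparse files use no actual space)
  truncate -s 10T /root/sparse1
  truncate -s 10T /root/sparse2

  # 3. Build the RAIDZ2 pool from the two freed disks, the two new disks,
  #    and the two sparse files (-f because it mixes files and whole disks)...
  zpool create -f tank2 raidz2 ada3 ada5 ada6 ada7 /root/sparse1 /root/sparse2

  # 4. ...and immediately offline the sparse files so nothing is written to them
  zpool offline tank2 /root/sparse1
  zpool offline tank2 /root/sparse2

  # 5. Copy the data over, e.g. recursive snapshot + replication
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs recv -F tank2

  # 6. Destroy the old pool and hand its two remaining disks to tank2 in
  #    place of the sparse files; wait for the resilvers to finish
  zpool destroy tank
  zpool replace tank2 /root/sparse1 ada2
  zpool replace tank2 /root/sparse2 ada4

  # 7. Optionally rename tank2 back to the old pool's name
  zpool export tank2
  zpool import tank2 tank

Treat that as a sketch, not a recipe; on TrueNAS you'd also want to do the final export/import through the GUI so the middleware knows about the pool.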
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@danb35 is correct, that's the only way to do it without doing a full backup and restore.

@the_jest If you do that tricky method, make sure your Ext4 backup disk is up to date. And ideally, have a second backup disk.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
@the_jest, @danb35's procedure is for someone who really knows what they're doing. If you typo some CLI-fu along the way, you could lose one or both of the degraded pools, and have to destroy both, recreate the new pool, and reload from backup. If you didn't have backups, I wouldn't even contemplate doing anything until I had a couple of good ones on hand.
 

the_jest

Explorer
Joined
Apr 16, 2017
Messages
71
OK, thanks, all. I do have good backups, but doing something so risky probably isn't the best idea.

Maybe sticking with mirrored vdevs is easiest anyway; if I have an array that can't be grown without swapping out every disk in the array, that's not a great solution either. Hrm.
 
Kenneth Barney

Joined
Jan 17, 2022
Messages
8
danb35 said:
Your best path may well be what Samuel suggests, but if you wanted to move to RAIDZ2 rather than RAIDZ1, it can be done. In brief, here's how it would work:
  • Detach one disk from each mirrored vdev
  • With the two detached disks, two new disks, and two sparse files, create a new RAIDZ2 pool. Immediately offline the sparse files.
  • Using whatever method you like (I think I'd favor snapshot/replication), copy the data from the old pool to the new pool
  • Replace the sparse files with the two remaining disks from the original pool
  • Optionally, rename the new pool to match the old pool's name
Kind of tricky, involves some CLI-fu, and from step 1 through step 4, you really don't have any redundancy. But here's how to create the degraded pool:
I'm trying to convert my current setup of four 3TB drives from mirrors to RAIDZ1 so that I can increase my usable space to 9TB. How do you immediately offline the sparse files, copy the data from the old pool to the new one, and replace the sparse files? I see how to detach, but after that I'm a bit clueless as to how to proceed.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
If you need to ask the question, you probably shouldn't even attempt it, but your questions are answered at the link you quoted.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@Kenneth Barney - You can't.

It is NOT possible to convert 4 disks, in a 2 x 2-disk mirror layout, to a RAID-Z1 of 4 disks. If you detach one disk from each mirror, that only gives you 2 free disks, not the minimum of 3 real disks needed to build a degraded 4-disk RAID-Z1 with a sparse-file placeholder.

Plus, as @danb35 said, if you have to ask, it is almost certainly too complex for you to implement safely.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Arwen said:
It is NOT possible to convert 4 disks, in a 2 x 2-disk mirror layout, to a RAID-Z1 of 4 disks.
It is, if the data could fit on one disk (or one vdev); vdev removal is live (and in the GUI) in TN12. So remove one mirrored vdev, wait for that to complete, then offline the second disk in the remaining vdev. Now you have your three disks to build the degraded pool. Good idea? Almost certainly not, but it could be done.
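In sketch form (again with made-up pool and device names; check zpool status for your real ones), that would be roughly:

  # Pool "tank": mirror-0 = ada2+ada3, mirror-1 = ada4+ada5 (hypothetical names)

  # Remove one whole mirror vdev; ZFS migrates its data onto the remaining vdev,
  # so this only works while everything still fits on one mirror
  zpool remove tank mirror-1
  zpool status tank        # wait for the removal/evacuation to finish

  # Offline the second disk of the remaining mirror to free it up as well
  zpool offline tank ada3

  # ada3 plus the two disks from mirror-1 are now the three real disks needed
  # for the degraded 4-disk RAID-Z1 build described earlier in the thread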
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@danb35 Excellent point. I had not considered that.
 