Advice on getting my setup to a better state: adding drives/vdevs, migrating between datasets, potential backup options

marmoset

Dabbler
Joined
Dec 18, 2020
Messages
27
Running 21.02, and I have 8x6TB drives in one raidz2 vdev.

Unix nerd, but haven't spent much time with zfs before this.

Some things I'm looking at making better:

a) My chassis can handle 4 more drives, and I'd like to add them for additional space. In an ideal world I would just add drives and get more space, but my understanding is that's not really how it works (raidz expansion still seems to be on the "eventually" list), so I'm trying to figure out the best/cheapest way to do this.

Is the only real (cost-effective) way to do this to get 4 new drives big enough to hold everything on the existing ones (so 4x12TB), add a new vdev, move everything onto it, then delete the old vdev, make two new 4x6TB vdevs, add those, and then over time, as the 6TB drives die, replace them with larger ones? I'm open to fancy migration strategies as long as the data stays safe.

I think if I had done two 4x6 raidz1 at the outset, this would have been easier, but c'est la vie.
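
Concretely, the shuffle I'm imagining looks something like this (pool and disk names are made up, the parity levels are just for illustration, and I'd verify the copy before destroying anything):

# temporary/new pool on the four 12 TB drives
zpool create tank2 raidz1 da8 da9 da10 da11
# copy everything over, datasets and zvols included
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F tank2
# only once tank2 is verified: retire the old pool and reuse its drives
zpool destroy tank
zpool add tank2 raidz1 da0 da1 da2 da3    # add -f if zpool complains about old labels
zpool add tank2 raidz1 da4 da5 da6 da7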

b) I didn't really grok datasets when I started this, so I have things stored in directories under the bulk dataset rather than in their own datasets. I'm migrating out of that (make a dataset, mv $olddir $dataset), but the virtual machines I set up are zvols (/dev/zdNNNparti) under the bulk dataset, so it's not (afaik) the same kind of move. Is there a standard/common way to migrate a zvol to a new dataset? Downtime isn't a big issue if I need to do some kind of import/export, but I'd prefer to avoid it for the larger ones.
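
For what it's worth, I'm guessing it's either a plain zfs rename within the pool or a snapshot plus send/receive, something like this (made-up names, VM shut down either way):

# same pool: just move the zvol under the new parent dataset, no data copied
zfs rename tank/bulk/vm1-disk0 tank/vms/vm1-disk0

# or copy it via a snapshot, then drop the original once the VM is happy
zfs snapshot tank/bulk/vm1-disk0@move
zfs send tank/bulk/vm1-disk0@move | zfs receive tank/vms/vm1-disk0
zfs destroy -r tank/bulk/vm1-disk0

but I don't know which of those is considered the sane approach here.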

c) This one is a little more esoteric: while I'm figuring out all the redundancy/exposure/confidence stuff, I'd like to back up to a separate system (once it's all understood I'll probably have just two systems, one doing snapshots to the other). I have a pretty big Ceph cluster, and I think the simplest thing is to have Ceph expose a large partition (via a VM) and do a zfs send there. Are there other (maybe better, maybe just different) ways to do that? In theory I could make a bunch of Ceph RBD devices and use those in a separate ZFS setup, but that seems a lot fussier.
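
Concretely I was picturing one of these (host, mount point and pool names invented):

# simplest: dump a replication stream to a file on the ceph-backed volume
zfs snapshot -r tank@bk1
zfs send -R tank@bk1 | ssh cephvm 'cat > /mnt/bigvol/tank-bk1.zfs'

# fussier: a separate zfs pool on RBDs inside the VM, received as a live copy
zfs send -R tank@bk1 | ssh cephvm zfs receive -uF backup/tank

The second keeps the backup browsable and makes incrementals (zfs send -i) easy, which is why it tempts me despite the extra moving parts.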

Thanks for any ideas, or course corrections where I am way off base.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,110
Raidz2 is safer than 2*raidz1, so no regret here.
Is the only real (cost-effective) way to do this to get 4 new drives big enough to hold everything on the existing ones (so 4x12TB), add a new vdev, move everything onto it, then delete the old vdev, make two new 4x6TB vdevs, add those, and then over time, as the 6TB drives die, replace them with larger ones? I'm open to fancy migration strategies as long as the data stays safe.
What you describe is actually making a new pool (on this NAS or another), moving the data, deleting the old pool, and creating a new pool with the desired geometry. You cannot remove raidz# vdevs from a pool or change their geometry (Z level, number of drives).

Considering that raidz2 is good for bulk storage but your zvols would do better on mirrors, my suggestion would be to keep your 8-wide raidz2 for storage, replace its drives with larger ones (say 8*12 TB to double the size), and create another pool of two 2-way mirrors for the zvols (possibly repurposing the 6 TB drives there). So:
1/ Buy 4*12 TB drives and replace 4 of the 8 6 TB drives with them in one go (the four free bays let the new drives resilver in while the raidz2 stays fully redundant). Build the 2*(2*6 TB) mirror pool from the freed drives, move the zvols onto it, and gain back the space they used on the raidz2.
2/ Buy another 4*12 TB and replace the remaining four 6 TB drives in the raidz2 with them (one at a time, since the bays are now full), eventually getting 6*12 = 72 TB of post-parity space on the raidz2 (ca. 60 TB usable). Plus 12 TB of mirrored space (ca. 6 TB usable at the 50% occupancy you want for block storage) for the zvols, and four 6 TB drives left over as spares.
Back up both pools to the Ceph cluster.
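
In zpool terms, step 1 is roughly this (pool and device names are placeholders):

# the four free bays let the new drives resilver in while the old ones
# stay online, so the raidz2 is never degraded
zpool set autoexpand=on tank
zpool replace tank da0 da8
zpool replace tank da1 da9
zpool replace tank da2 da10
zpool replace tank da3 da11
# the freed 6 TB drives then become the mirror pool for the zvols
zpool create fast mirror da0 da1 mirror da2 da3   # may need -f to clear old labels

For step 2 the bays are full, so each remaining 6 TB drive is pulled and replaced in place, one at a time; with autoexpand=on the raidz2 grows once the last 12 TB drive has resilvered.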
 

marmoset

Dabbler
Joined
Dec 18, 2020
Messages
27
Er, sorry, yes, I messed up what I was asking, but you understood. Since I *can't* expand vdevs, I was wondering if the new pool was the way to go.
 