Running 21.02, and I have 8x6TB drives in one raidz2 vdev.
Unix nerd, but I haven't spent much time with ZFS before this.
Some things I'm looking at making better:
a) My chassis can handle 4 more drives, and in an ideal world I'd just add them and get more space. My understanding is that's not really how it works (raidz expansion still seems to be on the "eventually" list), so I'm trying to figure out the best/cheapest way to do this.
Is the only real (cost-effective) way to do this to get 4 new drives big enough to hold everything on the existing ones (so 4x12T), build a new pool on them, move all the data over, destroy the old pool (since afaik you can't remove a raidz vdev from a live pool), rebuild the 6T drives as two new 4x6 vdevs, add those to the new pool, and then over time, as the 6T drives die, replace them with larger ones? Open to fancy migration strategies as long as the data security is still there.
I think if I had done two 4x6 raidz1 at the outset, this would have been easier, but c'est la vie.
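For reference, the shuffle I'm imagining would go roughly like this (pool and device names are placeholders, not my real layout):

```shell
# 1. Build a new pool on the four 12T drives (hypothetical device names).
zpool create newtank raidz1 sdi sdj sdk sdl

# 2. Snapshot everything recursively and replicate it to the new pool.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F newtank

# 3. Once verified, destroy the old pool and re-add the eight 6T drives
#    to the new pool as two 4-wide vdevs.
zpool destroy tank
zpool add newtank raidz1 sda sdb sdc sdd
zpool add newtank raidz1 sde sdf sdg sdh
```

That middle step is the scary one, since the data briefly lives only on the new 4x12T vdev.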
b) I didn't really grok datasets when I started this, so things are stored in directories under the bulk dataset rather than in datasets of their own. I'm migrating out of that (make a dataset, mv $olddir into it), but the virtual machines I set up are zvols (/dev/zdNNN devices) under the bulk dataset, so it's not (afaik) the same kind of move. Is there a standard/common way to migrate a zvol to a new dataset? Downtime isn't a big issue if I need to do some kind of export/import, but I'd prefer not to for some of the larger ones.
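To be concrete, here's what the directory migration looks like, plus my guess at the zvol case: since a zvol is itself a dataset, I'd assume zfs rename applies, with the caveat that the /dev/zvol path changes and the VM config has to follow it (all names below are made up):

```shell
# What I'm doing for plain directories:
zfs create tank/photos
mv /mnt/tank/bulk/photos/* /mnt/tank/photos/

# What I *think* covers a zvol -- rename it into the target hierarchy.
# The block device moves from /dev/zvol/tank/bulk/vm-disk0 to
# /dev/zvol/tank/vms/vm-disk0, so point the VM at the new path.
zfs create tank/vms
zfs rename tank/bulk/vm-disk0 tank/vms/vm-disk0
```

Happy to be corrected if a send/recv is actually the safer route there.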
c) This one is a little more esoteric: while I'm figuring out all the redundancy/exposure/confidence stuff, I'd like to back up to a separate system (once it's all understood, I'll probably have just two systems, with one doing snapshots to the other). I have a pretty big Ceph cluster, and I think the simplest thing is to have Ceph expose (via a VM) one large block device and do zfs send there. Are there other (maybe better, maybe just different) ways to do that? In theory I could make a bunch of Ceph RBD devices and build a separate ZFS setup on those, but that seems a lot fussier.
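What I'm picturing for the Ceph-backed target is a single-disk pool inside the VM on the RBD-backed device, receiving a full send once and incrementals after that (hostnames, pool names, and snapshot names below are all invented for illustration):

```shell
# On the backup VM, once: a pool on the Ceph-backed virtual disk.
zpool create cephpool /dev/vdb

# From the main box: initial full replication.
zfs snapshot -r tank@base
zfs send -R tank@base | ssh backup-vm zfs recv -F cephpool/tank

# Later: periodic incrementals against the previous snapshot.
zfs snapshot -r tank@next
zfs send -R -i tank@base tank@next | ssh backup-vm zfs recv -F cephpool/tank
```

I realize that puts ZFS-on-RBD behind the scenes anyway; it just keeps it down to one device instead of a fleet of them.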
Thanks for any ideas, or course corrections where I am way off base.