Extending Mirrored vdev pool

Longun

Cadet
Joined: Oct 23, 2019
Messages: 3
Hi All,

Just a quick one. I've got a multi-mirror vdev pool made up as follows.

vdev1 2x3TB
vdev2 2x3TB
vdev3 2x3TB

I've managed to scavenge another 2x3TB drives and was just planning to extend with another mirrored vdev. How is the data striped over the existing disks and then the new ones? I'm getting good throughput on the 2 mirrors 450MB/s as it writes over the 3 mirrors but how will it behave with an empty vdev being added if data is already striped over the existing mirrored pairs?

Cheers
 

Longun

Cadet
Joined: Oct 23, 2019
Messages: 3
Can't seem to edit. It should read:

I've managed to scavenge another 2x3TB drives and was just planning to extend with another mirrored vdev. How is the data striped over the existing vdevs and then the new one? I'm getting good throughput on the pool (450MB/s) as it writes over the 3 mirrors but how will it behave with an empty vdev being added if data is already striped over the existing 3x mirrored pairs? Will it just start writing over the 4 mirrors moving forwards?

I can remove the pool and recreate it if it'll be a big problem, but that'll take a while as I'd need to pull 5TB of data off, rebuild, and then put it back again.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined: Feb 6, 2014
Messages: 5,112
Hi @Longun

New write behavior will depend on how full your existing vdevs are. ZFS will aim "a few more writes" at your new vdev, so to speak, because it has proportionally more free space, but it won't simply ignore your existing vdevs and bottleneck your speed at "how fast can these two drives write." With about 5TB of data across three mirrors, you're at roughly 1.66TB per vdev, which is around 55% of each mirror's ~3TB of usable space. You may not reach the full 600MB/s the simple math implies ("3 vdevs is 450MB/s, so 4 is 600MB/s"), but even in the worst case it should be just as fast as it is now.
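
If it helps to picture it, here's a rough back-of-the-envelope model of that free-space bias. This is plain Python, not anything from the actual ZFS allocator, and it assumes ~3TB usable per mirror with writes split purely in proportion to free space:

```python
# Back-of-the-envelope model of free-space-weighted allocation.
# NOT the actual ZFS allocator (which also weighs metaslab state,
# queue depth, etc.); it just illustrates the free-space bias.

USABLE_PER_VDEV_TB = 3.0            # 2x3TB mirror ~= 3TB usable (decimal TB)
used_tb = [1.66, 1.66, 1.66, 0.0]   # three existing mirrors + the new, empty one

free_tb = [USABLE_PER_VDEV_TB - u for u in used_tb]
total_free = sum(free_tb)

# Approximate share of new writes each vdev receives, weighted by free space.
for i, free in enumerate(free_tb, start=1):
    print(f"vdev{i}: {free:.2f} TB free -> ~{free / total_free:.0%} of new writes")
```

The empty mirror takes the biggest slice (a bit over 40%) until it catches up, but the three existing mirrors keep receiving writes as well, which is why throughput shouldn't drop back to two-disk speed.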

Also, ZFS will not re-stripe the existing data; there's no option in current ZFS to rebalance a pool in place. If you want the data evenly redistributed, you'll have to pull it off, rebuild the pool from scratch, and copy it back, as you suggested.
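
If you want to keep an eye on how evenly things fill up after the new mirror goes in, `zpool list -v` shows per-vdev allocation. Here's a small sketch that summarizes it; the pool name is a placeholder and the column layout can shift between ZFS versions, so treat it as a starting point rather than gospel:

```python
# Sketch: summarize per-vdev fill from `zpool list -v -H -p`.
# Field order can vary between ZFS releases, so treat the indexes below
# as a starting point and adjust for your version's output.
import subprocess

POOL = "tank"  # placeholder pool name; substitute your own

out = subprocess.run(
    ["zpool", "list", "-v", "-H", "-p", POOL],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    # -H separates fields with tabs; child vdev rows may carry leading tabs.
    fields = [f for f in line.split("\t") if f]
    name = fields[0]
    if not name.startswith("mirror"):
        continue  # skip the pool summary row and the individual disks
    size, alloc = int(fields[1]), int(fields[2])
    print(f"{name}: {alloc / size:.0%} full ({alloc / 1e12:.2f} of {size / 1e12:.2f} TB)")
```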
 

Longun

Cadet
Joined: Oct 23, 2019
Messages: 3
Hi @HoneyBadger, thanks for the info. I'm not too worried about hitting full speed constantly; it's an occasional use case when I'm running it at full whack, generally when I'm pulling some 4K videos off the array to edit on my main rig, although I might try the odd edit with the data staying on the FreeNAS if the speed is good enough. FreeNAS and my editing rig are the only devices on 10GbE; everything else is switched at 1GbE, so it can't hit the full theoretical throughput of the drives. dd does give me a solid 450MB/s with compression off, though, and I'm not far off that with a file transfer either!
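
For reference, the dd run is just a big sequential write with compression off; a rough Python equivalent (the path is made up, point it at your own dataset) looks something like this:

```python
# Crude sequential-write check, roughly what the dd test above is doing.
# Point TEST_FILE at a dataset with compression off, or the zeros will
# compress away and the number will be meaningless.
import os
import time

TEST_FILE = "/mnt/pool/throughput.test"  # made-up path; adjust for your pool
BLOCK = b"\0" * (1 << 20)                # write in 1 MiB chunks
TOTAL_MB = 8192                          # 8 GiB total, enough to get past RAM caching

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())                 # make sure the data actually hits the disks
elapsed = time.time() - start

print(f"~{TOTAL_MB / elapsed:.0f} MiB/s sequential write")
os.remove(TEST_FILE)
```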

Not too worried about full redistribution as long as it's still balanced moving forward. Data gets backed off to a dumb NAS and removed at times, so it'll balance itself back out within a year.

Sounds like I'll be fine to extend the pool with the additional vdev. Thanks again.
 