Extending a pool

brettyj

Dabbler
Joined
Jul 30, 2020
Messages
23
Hey,

I'm pretty sure what I'm planning to do is fine, but it's the first time I've done it since setting up the NAS last year, so I just want to double-check and refresh my knowledge before I go in gung-ho.

I currently have a pool consisting of 3 VDEVs:

VDEV1 - Mirrored
2x 8TB
VDEV2 - Mirrored
2x 8TB
VDEV3 - Mirrored
2x 12TB

Together they form a striped-mirror (RAID10-style) pool. I have ordered two new 16TB drives and plan to add them as a new mirrored VDEV. Just to confirm: this will extend the pool and work completely fine? Is there anything I need to consider before doing it?

Thanks!
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi,

Indeed, you can create a new vdev and add it to the pool. You also respect the idea of having all of your vdevs based on the same layout (here, mirrors). A pool would work with a mix of a mirror and, say, a RAIDZ2 vdev, but that is not recommended.
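For reference, adding the new mirror from the command line looks something like this. This is only a sketch: `tank`, `ada6` and `ada7` are placeholder names for your pool and the two new disks, and on TrueNAS you would normally do this through the GUI instead.

```shell
# Dry run first: -n prints the resulting layout without changing anything
zpool add -n tank mirror /dev/ada6 /dev/ada7

# If the layout looks right, add the mirror for real
zpool add tank mirror /dev/ada6 /dev/ada7

# Confirm the pool now shows four mirror vdevs
zpool status tank
```

The dry run matters because `zpool add` is effectively permanent: on older ZFS versions a vdev added by mistake cannot be removed from the pool.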

The drawbacks in your case are:
--The existing data will not be moved
What is already on your current drives will stay there.
--The new vdev will concentrate almost all new activity
ZFS tries to balance the use of its vdevs. Because the new one will be empty and is also much bigger than the others, basically everything new will go to those 2 new drives. As such, your performance will not be optimal because I/O will not be distributed.

An option could be: replace one 8TB drive with a 16TB drive. The resilver will put some of the existing data on it.
Then replace the second drive in that mirror with your second 16TB drive. With autoexpand enabled, that vdev will grow to 16TB.
You can then re-add the two 8TB drives you removed as a new vdev. It will still be empty and will concentrate new writes, but being smaller than the others rather than much bigger, the imbalance will not be as bad.
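The replace-then-expand sequence above could look roughly like this on the command line (placeholder pool and device names again; each `zpool replace` must finish resilvering before you start the next one):

```shell
# Make sure the pool grows automatically once both disks are replaced
zpool set autoexpand=on tank

# Replace the first 8TB disk in the chosen mirror with a 16TB disk
zpool replace tank ada0 /dev/ada6
zpool status tank            # wait here until the resilver completes

# Replace the second 8TB disk with the second 16TB disk
zpool replace tank ada1 /dev/ada7
zpool status tank            # wait for this resilver too

# The vdev is now 16TB; re-add the two freed 8TB disks as a new mirror
zpool add tank mirror /dev/ada0 /dev/ada1
```

On TrueNAS the replace and extend steps are available in the GUI, which is the recommended route; the commands are shown only to make the order of operations explicit.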
 

brettyj

Dabbler
Joined
Jul 30, 2020
Messages
23
Hi,

Thanks for the reply. The performance shouldn't be too bad though, should it? These are mirrored VDEVs running Exos (7200RPM) drives, so it should still be pretty nippy, just not as good as it could potentially be with the data spread across every VDEV. I suspect for my use case it would be more than sufficient? I could always add another VDEV with two more 16TB drives in the next few months, and then performance would be pretty good, right?

Thanks
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Performance will not be terrible, but it will not be ideal either. As a rule of thumb, a pool has the performance of as many drives as it has vdevs. So if a pool is a single 8-drive RAIDZ2 vdev, it has about the performance of a single drive. If a pool is made of 4 mirror vdevs, it should offer about 4 times the performance of a single drive (reads and writes about 4 times faster).

Here, you will have those 4 vdevs but you will not get the performance of about 4 drives. The reason is that almost all writes will happen on a single vdev, so they will have the performance of a single drive. As for reads, everything new will have been concentrated on that single vdev, so new data will also read at the speed of a single vdev. Existing data will read at about the speed of 3 drives instead of 4.
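If you want to see this imbalance on a live pool, the per-vdev allocation and activity can be inspected with standard zpool commands (`tank` is a placeholder pool name):

```shell
# Space allocated and free per vdev: a freshly added vdev shows near-zero ALLOC
zpool list -v tank

# Per-vdev I/O activity, refreshed every 5 seconds: watch where the writes land
zpool iostat -v tank 5
```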

By auto-expanding one of your 8TB vdevs and adding an 8TB mirror, instead of simply adding a 16TB mirror, you will reduce the gap.

Say your pool is loaded to 50%, so 14TB of data on 28TB. Should you do it your way, the next writes will be concentrated on your new vdev, and you will not approach balance until you have added roughly as much data again as is already in your pool. And even then, new data will be on 1 vdev and old data on 3; you will never be spread across all 4 vdevs.

Should you do the auto-expand first and re-add the 8TB drives after, things will be better. Your 14TB would be roughly 6TB on the 12TB vdev, 4TB on the remaining 8TB vdev and 4TB on the now-16TB one. So your new writes will be split between 2 vdevs (the 16TB one and the new 8TB one) instead of landing on only one of them.
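The arithmetic above can be checked with a quick back-of-the-envelope script (pure shell arithmetic, no ZFS involved; the 50% load figure is the assumption from the post):

```shell
#!/bin/sh
# Current usable capacity across the three mirrors: 8 + 8 + 12 TB
total=$((8 + 8 + 12))
used=$((total / 2))          # pool assumed 50% full, per the post
echo "Pool: ${total}TB usable, ${used}TB of data to consider"

# Option A: just add a 16TB mirror. All existing data stays put:
echo "Option A: ${used}TB on 3 old vdevs, 0TB on the new 16TB vdev"

# Option B: expand one 8TB mirror to 16TB first, then re-add the 8TB disks.
# Resilvering carries that vdev's data onto the 16TB disks, so the existing
# data ends up spread roughly 6/4/4 across three vdevs:
echo "Option B: $((6 + 4 + 4))TB spread 6/4/4, new writes split over 2 vdevs"
```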
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If you really want/need the performance (IOPS) of 4 mirrors, you should first get identically sized disks for all mirrors and then re-load all data into the pool so it's evenly spread.

With unequal mirror sizes in the pool, you're never likely to get to peak IOPS (nor throughput) speeds.
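One way to do that re-load, sketched with a placeholder dataset name: copy each dataset within the pool so the copy's writes spread across all vdevs, then swap it in. (This is a simplification; nested datasets, snapshots and free-space headroom all need more care than shown here, and a backup before destroying anything goes without saying.)

```shell
# Snapshot the dataset to rebalance
zfs snapshot -r tank/data@rebalance

# Copy it within the pool; the received copy is written across all vdevs
zfs send -R tank/data@rebalance | zfs receive tank/data_new

# After verifying the copy, remove the original and rename the copy
zfs destroy -r tank/data
zfs rename tank/data_new tank/data
```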
 