3 x 3 mirrors (14TB each drive) - any concerns to be aware of?

tsm37

Dabbler
Joined
Feb 19, 2023
Messages
46
I'm about to wipe my test environment and do a clean install so my TrueNAS SCALE box can go live. I'm looking to do 3 x 3 mirrors using six Western Digital Red 14TB HDDs. Given the size of each drive, is there anything I should be aware of with respect to performance if, at some point, I need to replace a drive for whatever reason and it needs to resilver?

Secondly, I plan to have just 3 vdevs (3 x 3 mirrors) in this HDD pool. Again, given the drive size (14TB each), is keeping this pool to only 3 vdevs acceptable? I've heard there is greater risk in adding more vdevs to a single pool, but I want to keep it at 3 for now. Please shed some light. Thank you
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
With 6 x 14TB drives, that is 3 x 2-way Mirrors, not 3 x 3-way Mirrors. Or am I missing something?

Using larger disks, (say, greater than 2TB), there is a risk that re-silvering a disk during replacement will detect another bad block on the remaining Mirror disk. Thus, data loss. So 3-way Mirrors are suggested in some cases.
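If you did want 3-way Mirrors, six disks would give you 2 vDevs of 3 disks each. TrueNAS builds the pool for you from the GUI, but as a minimal command-line sketch of that layout, assuming a pool named tank and hypothetical /dev/sdX device names:

# 2 x 3-way Mirror vDevs from six disks (hypothetical device names)
zpool create tank mirror sda sdb sdc mirror sdd sde sdf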


As for having lots of vDevs in a pool, that is fine as long as they have the same level of redundancy. Mixing redundancy levels in the same pool is not recommended. Lots of people use wide stripes of Mirrors for iSCSI and VM storage: better performance, at the cost of storage overhead.
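For illustration, a minimal sketch of your planned layout, and of growing it later with another vDev at the same redundancy level, again assuming a pool named tank and hypothetical device names:

# The planned 3 x 2-way Mirror pool
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
# Later, stripe in a 4th Mirror vDev with the same redundancy level
zpool add tank mirror sdg sdh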


Last, even though Western Digital Red 14TB drives were CMR in the past, check them out just before you buy. WD snuck SMR drives into the Red line, which forced them to create a Red Plus line that absolutely does not have any SMR drives. SMR drives, especially from WD, are really not recommended for ZFS.
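One quick way to identify exactly which drives you have, assuming smartmontools is installed, is to pull the model number and check it against WD's published CMR/SMR lists:

# Print the drive's model number (check it against WD's CMR/SMR documentation)
smartctl -i /dev/sda | grep -i model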
 

tsm37

Dabbler
Joined
Feb 19, 2023
Messages
46
Yes, sorry. I meant 3 x 2-way mirrors. Thanks for chiming in. As for the WD Red drives: I'm retiring my Synology and bringing the existing WD drives over to the TrueNAS box. Luckily, they're all CMR.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
If re-using WD Red 14TB drives, then they are probably CMR, (which is fine).

One thing you can do to mitigate disk replacements is to make sure you have a spare disk slot, a 7th disk slot in your case. If a disk that needs replacement has not completely failed yet, you can replace it in place. This allows ZFS to make a temporary 3-way Mirror out of the replacement disk, the failing disk, and the remaining good one. If there is a bad block on the "good one", you might luck out and that block might still be readable on the "failing disk".

Thus, the replacement disk gets all the good blocks from the vDev that it possibly can.
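A minimal sketch of that flow, with hypothetical device names, where sdb is the failing disk and sdg is the new disk sitting in the spare slot:

# Replace in place: sdb stays attached while sdg resilvers, so ZFS can
# read good blocks from both sdb and its Mirror partner
zpool replace tank sdb sdg
zpool status tank   # watch the resilver progress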


This feature, which I call "replace in place", was somewhat unique to ZFS at the time it was introduced. I personally experienced a hardware RAID-5 set that could not be recovered because there were bad blocks on multiple disks. Remove one disk and you introduced data loss. It ended up being a full backup, fix the RAID-5, and restore job, which was only possible because the bad blocks were not in the same RAID-5 parity block. (Later, that vendor introduced Patrol Reads, similar to ZFS scrubbing, to detect the problem sooner.)
 

tsm37

Dabbler
Joined
Feb 19, 2023
Messages
46
Thanks again for the additional input. Yes, I do have extra slots available on my new TrueNAS SCALE box, and I like the idea of adding a spare disk for the scenario that you described. Much appreciated.
 