Which of the following ZFS layout configurations do you go with and why for 36 disks?


soulburn

Contributor
Joined
Jul 6, 2014
Messages
100
Given 36 total disks in a single chassis with one E5-1620 v3, 128 GB RAM, and a single 9211-8i HBA, which of the following ZFS layout configurations do you go with and why?
  1. 3x 11-disk vdevs in RAIDZ3 with 3 hot spares (3 disks can fail per vdev, plus 3 hot spares)
  2. 6x 6-disk vdevs in RAIDZ2 (2 disks can fail per vdev, no hot spares)
  3. Some other configuration?
The way I see it, both configurations give 24 disks of usable space, but I'm guessing the 6x 6-disk RAIDZ2 layout would be better: each vdev can tolerate 33% of its disks failing (2 of 6) versus 27% (3 of 11) for the RAIDZ3 layout, it should deliver better IOPS with double the number of vdevs, and it should resilver faster when a disk fails. Does this make sense?

Which do you go with and why? If neither is ideal, is there another option I'm missing?
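In zpool terms, the two options would look roughly like this, as a minimal sketch (the pool name and the da0-da35 device names are assumptions for illustration):

  # Option 1: 3x 11-disk RAIDZ3 vdevs plus 3 hot spares
  zpool create tank \
    raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
    raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 \
    raidz3 da22 da23 da24 da25 da26 da27 da28 da29 da30 da31 da32 \
    spare da33 da34 da35

  # Option 2: 6x 6-disk RAIDZ2 vdevs, no spares
  zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 \
    raidz2 da18 da19 da20 da21 da22 da23 \
    raidz2 da24 da25 da26 da27 da28 da29 \
    raidz2 da30 da31 da32 da33 da34 da35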
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I'd say option 2 if you keep tested cold spares on a shelf and can physically replace a failed drive within 2 or 3 days. Otherwise I'd say option 1.
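If you go the cold-spare route, the swap itself is a one-liner once the dead drive has been physically replaced; a sketch, with the pool and device names assumed:

  # identify the failed drive
  zpool status tank
  # after physically swapping in the cold spare (same slot), resilver onto it
  zpool replace tank da7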

What will be the typical usage of this server? (VM, backup, media, ...)
 
DifferentStrokes
Joined
Jan 9, 2015
Messages
430
Perhaps 4x 9-disk Z2: 28 usable drives.
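A sketch of that layout, device names assumed:

  # 4x 9-disk RAIDZ2 vdevs, all 36 bays used, 28 drives' worth of usable space
  zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 \
    raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17 \
    raidz2 da18 da19 da20 da21 da22 da23 da24 da25 da26 \
    raidz2 da27 da28 da29 da30 da31 da32 da33 da34 da35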

Like @Bidule0hm asked, what the server will be used for should be the driving force behind the decision.
 

soulburn

Contributor
Joined
Jul 6, 2014
Messages
100
I'd say option 2 if you keep tested cold spares on a shelf and can physically replace a failed drive within 2 or 3 days. Otherwise I'd say option 1.

What will be the typical usage of this server? (VM, backup, media, ...)
Hi Bidule0hm,

Thanks for your reply. I'm happy to hear I seem to be on the right track with my pick. This is for a homelab, so nothing is critical and it will be easy to replace dead disks. Usage will mainly consist of media sharing and an iSCSI target for VMware datastores served out to an HA cluster. A big part of the homelab is to sharpen my professional skills and to study for various professional certifications. Plex and other programs will run on the hosts in the HA cluster.
 

soulburn

Contributor
Joined
Jul 6, 2014
Messages
100
Perhaps 4x 9-disk Z2: 28 usable drives.

Like @Bidule0hm asked, what the server will be used for should be the driving force behind the decision.

Thanks for the reply, DifferentStrokes. I hadn't even considered that since it would be "non-optimal" under the old power-of-two data-disk guideline, but the truth is that it doesn't really matter anymore with compression enabled. It's also attractive because I'd get 28 usable drives. If we're going with 4x 9-disk options, I guess another configuration would be 4x 9-disk Z3. Hmm...
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I'd go with 2 pools: possibly a big one using RAIDZ2 or RAIDZ3, depending on your risk tolerance, and a smaller one for iSCSI usage. For the latter, use striped mirrors.
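As a sketch of one possible split (the pool names and the 4/32 disk split are assumptions, not part of the suggestion above):

  # small, fast pool for iSCSI: striped mirrors
  zpool create fast \
    mirror da0 da1 \
    mirror da2 da3

  # big pool for bulk storage: 4x 8-disk RAIDZ2 from the remaining 32 disks
  zpool create bulk \
    raidz2 da4 da5 da6 da7 da8 da9 da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 da18 da19 \
    raidz2 da20 da21 da22 da23 da24 da25 da26 da27 \
    raidz2 da28 da29 da30 da31 da32 da33 da34 da35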
 

soulburn

Contributor
Joined
Jul 6, 2014
Messages
100
I'd go with 2 pools: possibly a big one using RAIDZ2 or RAIDZ3, depending on your risk tolerance, and a smaller one for iSCSI usage. For the latter, use striped mirrors.
Yes, I totally forgot about that too! Thanks for reminding me. So now I can add the following to my list of options:

  1. 4x drives in striped mirrors as the iSCSI target for the VMware datastore (2 drives usable; really 1 drive in practice, since it's not recommended to go over 50% capacity when using iSCSI on ZFS; see the sketch after this list), plus the other 32x drives in 4x 8-disk RAIDZ2 vdevs (24 drives usable)
  2. 36x drives in 4x 9-disk RAIDZ2 vdevs (28 drives usable)
  3. 36x drives in 4x 9-disk RAIDZ3 vdevs (24 drives usable)
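For the mirror pool, the 50% guideline mostly comes down to how big a zvol you export over iSCSI; a sketch, with the pool name, zvol name, and size all assumed:

  # assuming the 4-disk striped mirror pool ends up around 7 TiB usable,
  # a ~3.5 TiB zvol keeps pool utilization near the 50% guideline
  zfs create -V 3.5T fast/vmware-datastore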

It seems I'm just going to have to build each one and test, which I'm definitely not looking forward to doing...
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
It seems I'm just going to have to build each one and test, which I'm definitely not looking forward to doing...
I would say @gpsguy nailed it. You can't optimize one pool for space, reliability and performance at the same time. However, with more than one pool you can try to optimize each pool for a different application, and you won't have to test every possible layout.
it would be "non-optimal", but the truth is that it doesn't really matter
Yes, forget about optimizing # of drives per vdev.
 