Pool layout feedback

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
Seeking to understand the pros and cons of configuring a pool.

The pool will contain 16 SSDs: 8 at 1TB and the other 8 at 2TB, all IronWolf. I was thinking of doing 4 vdevs of RAIDZ1.

Questions and clarifications needed:

  1. Does RAIDZ1 across 4 vdevs mean that each vdev will tolerate 1 disk failure, or does it mean the entire pool can only tolerate one disk failure?

  2. If I were ever to upgrade 4 of the 1TB disks, can I safely swap those disks out (all at once, or one at a time while replacing each disk) and have the data retained on the other disks in the vdev?

  3. What would be the best way to configure now for an upgrade in the future?

The storage is for VMs and general file storage, but for a development environment. Do I care about losing data? Yes, to a point, but again, it is a dev environment for my techs to spin up VMs, learn ESXi and vSphere, and tinker with other applications and services.

If there is a more reasonable way to lay out the pool (or pools), I'm open to listening, understanding, and being educated.

Thanks for your time
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Does RAIDZ1 across 4 vdevs mean that each vdev will tolerate 1 disk failure, or does it mean the entire pool can only tolerate one disk failure?
I think you mostly understood it...

This is what you will get with 2 drive failures at the same time... red is pool loss, yellow is a degraded pool:

[Image: grid of all 2-disk failure combinations across the 4 RAIDZ1 vdevs; red = pool loss, yellow = degraded pool]


If I were ever to upgrade 4 of the 1TB disks, can I safely swap those disks out (all at once, or one at a time while replacing each disk) and have the data retained on the other disks in the vdev?
You can resilver one at a time and it will be fine; not all at once. (If you have space and a port available to do it, you can eliminate the window of no redundancy by replacing each disk while the old one is still in the pool; otherwise, you risk pool loss while the resilver happens.)
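That in-place replacement looks something like this; a minimal sketch, assuming a pool named tank and placeholder device names (swap in your own):

```sh
# Connect the new 2TB disk on a spare port, then replace in place.
# Naming both the old and new device keeps the old disk active in
# the vdev until the resilver completes, so redundancy is never lost.
zpool replace tank da3 da16

# Check resilver progress before moving on to the next disk.
zpool status tank
```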

What would be the best way to configure now for an upgrade in the future?

The storage is for VMs and general file storage, but for a development environment. Do I care about losing data? Yes, to a point, but again, it is a dev environment for my techs to spin up VMs, learn ESXi and vSphere, and tinker with other applications and services.

If there is a more reasonable way to lay out the pool (or pools), I'm open to listening, understanding, and being educated.
If you want to see any kind of performance that won't suck, you'll need to do mirrors. RAIDZ1 is no good for block storage behind VMs... https://www.truenas.com/community/threads/the-path-to-success-for-block-storage.81165/
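For reference, the shape of that mirror layout as a creation command; a sketch only, with placeholder device names, pairing like-sized disks so no space is wasted inside a mirror:

```sh
# 8x 2-way mirrors: da0-da7 stand in for the 1TB disks,
# da8-da15 for the 2TB disks.
zpool create tank \
  mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 \
  mirror da8 da9 mirror da10 da11 mirror da12 da13 mirror da14 da15
```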
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
@sretalla

Thank you for the reply. What I understood from your graph is:

If a single disk fails in a vdev, the zpool goes to degraded. Should 4 disks fail in a vdev, the zpool fails. Should more than 4 disks across 4 separate vdevs fail, the pool fails.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
If a single disk fails in a vdev, the zpool goes to degraded.
Right

Should 4 disks fail in a vdev, the zpool fails.
Well... Yes, but that is already true as soon as 2 disks fail in a RAIDZ1 vdev. No need to lose them all.

Should more than 4 disks across 4 separate vdevs fail, the pool fails.
Actually, the moment any single vdev loses 2 drives, the pool is lost.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What's important to think about is how happy you are with these equations:

16 x 15 (240) ways 2 disks failing can go...

48 ways you can lose the pool with just 2 disks failed (the red cells)

240 - 48 (192) ways you can lose 2 disks without losing the pool

20% chance that the second lost disk kills your pool; whether to take that risk is up to you...
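To spell out where the 48 comes from: within any one 4-wide RAIDZ1 vdev there are 4 x 3 = 12 ordered ways for 2 of its disks to fail, and with 4 vdevs that's 4 x 12 = 48 fatal outcomes out of the 16 x 15 = 240 total, hence 48 / 240 = 20%.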

If I drew this out with mirrors or 2x RAIDZ2 it would be very different.

Back in a few minutes with that
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
For Mirrors:
[Image: grid of all 2-disk failure combinations across 8 2-way mirrors; red = pool loss, yellow = degraded pool]


16 x 15 (240) ways 2 disks failing can go...

16 ways you can lose the pool with just 2 disks failed (the red cells)

240 - 16 (224) ways you can lose 2 disks without losing the pool

7% chance that the second lost disk kills your pool

Bonus: much better IOPS performance and faster resilvering (and you could designate a spare or 2 to the pool to further reduce risk).

You lose 50% capacity to redundancy... but ... IOPS!!!
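Same arithmetic as above: each 2-way mirror contributes 2 x 1 = 2 fatal orderings, times 8 mirrors = 16 out of 240, and 16 / 240 is about 6.7%, the ~7% above.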
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
And just for fun...

[Image: grid of all 2-disk failure combinations across 2 8-wide RAIDZ2 vdevs; no red cells]


No chance that any 2 disks failing takes out your pool. But still not great for IOPS/block storage.

Good for avoiding pool loss while retaining some economy on cost/capacity.
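Worked out the same way: RAIDZ2 tolerates 2 failed disks per vdev, so a fatal combination needs 3 failures inside the same 8-wide vdev; with only 2 disks failed, that's 0 fatal outcomes out of 240.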
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
*sigh*


Why did you have to bring math into this? Ugh....

Jokes aside, I see your point and the risk you're describing. I'll have to see what the overall storage would look like with your proposal.

What would be a good ZRAID calculator to use for this, since I have a split set of disks with different sizes? I use the following two sites.

 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
What would be a good ZRAID calculator to use for this, since I have a split set of disks with different sizes?
The Wintelguy one is great.
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
[Image: calculator results for config 1 and config 2]

Using your "ZRAD2 with 2 vdevs" graph to calculate storage and the above pictures, is it accurate to say combining TUSC of config 1 and 2 would be accurate, so 18TB of usable?
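My arithmetic, assuming each size group becomes one 8-wide RAIDZ2 vdev: 2 disks of parity per vdev leaves 6 x 1TB = 6TB usable from config 1 and 6 x 2TB = 12TB from config 2, so roughly 18TB combined before ZFS overhead.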
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
is it accurate to say that combining the TUSC of configs 1 and 2 gives 18TB usable?
Looks reasonable to me.

Keeping in mind I'm not recommending RAIDZ2 for block storage... (just making sure I was clear; it's your hardware and your team suffering the poor performance, so it's entirely your choice).
 