RAIDZ expansion, it's happening ... someday!

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I feel like people would use this feature without reading the best practices (as is often the case) and make their vdev really wide.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I feel like people would use this feature without reading the best practices (as is often the case) and make their vdev really wide.
ZFS has always handed the user a loaded gun and left them to either shoot bullseyes or their own feet.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I feel like people would use this feature without reading the best practices (as is often the case) and make their vdev really wide.
ZFS has always handed the user a loaded gun and left them to either shoot bullseyes or their own feet.
Yes, but this is a TrueNAS forum :smile:.

iX could easily have the GUI and/or TUI, (aka Middleware CLI), put a limit of, say, 12 disks as the maximum. After that, you get a warning with a checkbox to "force" going beyond 12 disks. That way the GUI attempts to "help" the users, (or Enterprise customers), but lets them shoot themselves as desired.

Now, whether the "suggested" limit should be 12 disks or 10, (or even 11), might produce some discussion. Using 12 makes a bit of sense because some chassis have 12 disk bays, or a multiple of 12, or something close to a multiple of 12, (like 26 disks).
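
As a sketch of what I mean, (purely illustrative Python, not actual TrueNAS middleware code, and the names are made up), the check could be as simple as:

SUGGESTED_MAX_WIDTH = 12  # soft limit, matching common 12-bay chassis

def validate_vdev_width(disks, force=False):
    # Warn-and-confirm check for wide RAID-Zx vDevs: refuse widths
    # past the suggested maximum unless the user ticked "force".
    width = len(disks)
    if width > SUGGESTED_MAX_WIDTH and not force:
        raise ValueError(
            f"vDev width {width} exceeds the suggested maximum of "
            f"{SUGGESTED_MAX_WIDTH}; check 'force' to proceed anyway"
        )
    return width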
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Yeah that's a smart idea. Maybe SCALE will do that.
 

Philip Robar

Contributor
Joined
Jun 10, 2014
Messages
116
I feel like people would use this feature without reading the best practices (as is often the case) and make their vdev really wide.
My 20 x 2 TB vdev says hi. (Sometimes all you care about is maximizing usable space.)
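
Back-of-the-envelope, (assuming RAID-Z2 and ignoring padding and metadata overhead), here is why one wide vdev wins on raw space over splitting the same disks into two narrower ones:

DISK_TB = 2
one_raidz2_20wide = (20 - 2) * DISK_TB       # 36 TB usable: 2 parity disks total
two_raidz2_10wide = 2 * (10 - 2) * DISK_TB   # 32 TB usable: 4 parity disks total
print(one_raidz2_20wide, two_raidz2_10wide)  # 36 32 -> 4 TB more from going wide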
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
My 20 x 2 TB vdev says hi. (Sometimes all you care about is maximizing usable space.)
In some cases, a 20-disk-wide RAID-Zx will work fine.

One thing RAID-Zx has over dRAID is that a small file in RAID-Zx might not consume the entire stripe width. This leaves space for another small file, (or more), in the remaining storage of the "stripe". dRAID, on the other hand, (if I understand it correctly), requires the entire stripe to be dedicated to a single file. Even a 1-byte file :-(.
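
To put rough numbers on it, (again, based on my understanding of the on-disk formats, assuming 4 KiB sectors and no compression), the allocation cost of a 1-sector file works out like this:

import math

def raidz_alloc(data_sectors, width, parity):
    # RAID-Zx adds parity per logical row of (width - parity) data
    # sectors, then pads the allocation to a multiple of (parity + 1).
    rows = math.ceil(data_sectors / (width - parity))
    total = data_sectors + rows * parity
    return math.ceil(total / (parity + 1)) * (parity + 1)

def draid_alloc(data_sectors, data_width, parity):
    # dRAID rounds every allocation up to whole stripes of data_width
    # data sectors, each stripe carrying its own parity.
    rows = math.ceil(data_sectors / data_width)
    return rows * (data_width + parity)

print(raidz_alloc(1, width=20, parity=2))       # 3 sectors: 1 data + 2 parity
print(draid_alloc(1, data_width=18, parity=2))  # 20 sectors: the whole stripe

So on a 20-wide layout, that tiny file costs 3 sectors under RAID-Z2, but a full 20 sectors under dRAID2.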

However, under some conditions, extra-wide RAID-Zx can slow down. So much so that it gets ridiculous to continue writing new files. The actual cause of the problem is not clear, because much less testing has been done with extra-wide RAID-Zx vDevs. We only have anecdotal reports from people with single configurations and largely unknown usage cases.

Sometimes I wish someone would perform tests of extra-wide RAID-Zx vDevs to find the reasons for this problem.


What I am trying to say is that if it works for you, and continues to work for you, great.

But it is not for everyone. (Thus my "suggested limit" for RAID-Zx expansion with an option to go beyond it, and not a "hard limit".)
 