dRAID Expansion Planning Question

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
Please help me understand whether my dRAID expansion plan makes sense. Thank you!

This project is for my personal home use, but I've been running FreeNAS for about a decade. I have a newly acquired 45drives bare chassis, and in its end state I think I want it to hold 4 groups of 8x data and 3x parity drives, plus the last drive as a spare. If I'm not mistaken, in dRAID nomenclature this would be dRAID3:8:1. This results in 32x data, 12x parity, and 1 spare.
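Sanity-checking my own math: 4 groups x (8 data + 3 parity) = 44 drives, plus the 1 spare = 45, which gives the 32 data, 12 parity, and 1 spare above. Usable space should then be roughly 32/45, about 71% of raw capacity, before any dRAID padding and metadata overhead.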

Now, in the RAIDZ3 format, the best plan to get a low-cost Z3 vdev like this would be to buy 45x small drives and, when more capacity was required, slowly replace them one by one with new larger drives.* This is because you can't easily change the vdev stripe width. However, with dRAID, if I understand correctly, I can easily expand my vdev width* and the data is rebuilt across it as if I had started with that many drives. This would let me spend my budget on larger-capacity drives and slowly add to the array each year.

Thus, my plan is to use 18TB WD Gold SATA drives. They draw less power than SAS drives, which will save me money on power and cooling, while offering higher reliability and performance than Reds. My expansion plan is:

Year 1: Add 12x 18TB WD Gold creating a dRAID3:8:1 vdev. This is 8 data, 3 parity, and 1 spare.
Year 2: Add 11x 18TB WD Gold expanding the array, while keeping the dRAID3:8:1 vdev configuration. This is now 16 data, 6 parity, and 1 spare (in total).
Year 3: Add 11x 18TB WD Gold expanding the array, while keeping the dRAID3:8:1 vdev configuration. This is now 24 data, 9 parity, and 1 spare (in total).
Year 4: Add 11x 18TB WD Gold expanding the array, while keeping the dRAID3:8:1 vdev configuration. This is now 32 data, 12 parity, and 1 spare (in total).

After year 4 the array is fully built, and any further expansion means either another chassis for a cluster or slowly replacing all 45 drives with larger-capacity units.
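Rough capacity math for the plan, assuming 18TB drives and ignoring dRAID padding and other overhead: Year 1 is about 8 x 18TB = 144TB usable, Year 2 about 288TB, Year 3 about 432TB, and Year 4 about 32 x 18TB = 576TB usable out of 810TB raw. Real numbers will land lower once overhead and free-space headroom are accounted for.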

Is this an effective and safe strategy for my data? Thank you, your help is greatly appreciated.

Edit: corrected my vdev vs stripe width error for clarity.
*similar to manually expanding a RAIDZ3, but with automatic reflowing of data.
 
Last edited:

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
Reading the documentation, I should clean up my notation. Final would be: dRAID3:8d:1s:45c

Year 1: dRAID3:8d:1s:12c
Year 2: dRAID3:8d:1s:23c
Year 3: dRAID3:8d:1s:34c
Year 4: dRAID3:8d:1s:45c
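If I'm reading the OpenZFS man pages right, the full spec string is draid[<parity>][:<data>d][:<children>c][:<spares>s] (I'm not sure whether the parser cares about the order of the d/c/s parts), so the end state might be created with something like the following. Pool name and device names are placeholders, not a tested command:

# hypothetical end state: one 45-child dRAID3 vdev, 8 data disks per redundancy group, 1 distributed spare
# /dev/da{0..44} relies on brace expansion (e.g. bash) to list da0 through da44
zpool create tank draid3:8d:45c:1s /dev/da{0..44}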
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
dRAID isn't available yet in TrueNAS. According to OpenZFS, it's planned for release in OpenZFS 2.1.0. TrueNAS 12 is still running OpenZFS 2.0.0.

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
dRAID isn't available yet in TrueNAS. According to OpenZFS, it's planned for release in OpenZFS 2.1.0. TrueNAS 12 is still running OpenZFS 2.0.0.
Yes, this is true. OpenZFS 2.1 was released this year and should make it into TrueNAS Core/Scale at some point; maybe within the year?

I have an existing TrueNAS Core system that will get transplanted into the new chassis for now. However, for planning purposes I'm trying to verify that my understanding of dRAID expansion is valid.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Afaik ZFS cannot expand draid vdevs, and the upcoming raidz expansion feature could be adapted to also work on draid, but hasn’t been.

With current tech, your best bet is multiple raidz3 vdevs, one per year. The drawback is the longer rebuild time compared to draid.
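As a rough sketch (placeholder pool and device names, adjust widths to taste), the yearly growth is just one zpool add per new raidz3 vdev:

# year 1: create the pool with the first 11-wide raidz3 vdev
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10
# each later year: grow the pool by adding another 11-wide raidz3 vdev
zpool add tank raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21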
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
When operating such a large pool, it is safer to rely only on mature software and setups. So let dRAID mature for a few years before considering it for production.

With a JBOD with 45 slots, I would consider 6x 7-wide raidz2 + 3 hot spares, because an 11-wide raidz3 will take days to resilver.
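Roughly like this, with placeholder device names:

# 6x 7-wide raidz2 plus 3 hot spares fills the 45 slots
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 \
  raidz2 da7 da8 da9 da10 da11 da12 da13 \
  raidz2 da14 da15 da16 da17 da18 da19 da20 \
  raidz2 da21 da22 da23 da24 da25 da26 da27 \
  raidz2 da28 da29 da30 da31 da32 da33 da34 \
  raidz2 da35 da36 da37 da38 da39 da40 da41 \
  spare da42 da43 da44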
 

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
Afaik ZFS cannot expand draid vdevs, and the upcoming raidz expansion feature could be adapted to also work on draid, but hasn’t been.

With current tech, your best bet is multiple raidz3 vdevs, one per year. The drawback is the longer rebuild time compared to draid.
I believe OpenZFS does/can/will support it, but it obviously requires integration with TrueNAS, which I assume will happen at some point. Here's one such blog post from a storage expert regarding the capability: https://barrywhytestorage.blog/2020/03/25/draid-expansion-the-important-details/
 
Last edited:

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
When operating such a large pool, it is safer to rely only on mature software and setups. So let dRAID mature for a few years before considering it for production.

With a JBOD with 45 slots, I would consider 6x 7-wide raidz2 + 3 hot spares, because an 11-wide raidz3 will take days to resilver.
This is absolutely true, and it's exactly why my current plan is to transplant my existing array's guts into the 45drives chassis. I only have a 12-disk array for now, but I had reached the limits of my existing chassis. I wanted to explore and understand my future options, and timelines are flexible. I did put a yearly expansion in my plan above, but that was notional, primarily based on the idea that I probably want to build out the array before any disk is out of warranty.

My current array is an 11-wide RAIDZ3 with a hot spare, so that's typically my starting point. Not the fastest config. The long rebuild times are the primary reason I was looking into dRAID. As noted by Yorick above, RAIDZ3 expansion is also coming, and perhaps your suggested 6x 7-wide with 3x hot spares would work until dRAID matures.
 
Last edited:

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
After more dRAID reading, it seems that I'm still stuck thinking about it conceptually as RAIDZ3. Unlike RAIDZ3, you don't need to expand by a full stripe width (e.g. the 11-wide sets in my original post). You can currently expand by 1 to 12 disks regardless of your stripe width, although you can't change the stripe width itself. There are some interesting grid graphics out there.

It seems the recommendation is to go as wide as possible, which currently is 14 disks + parity + spares, with triple parity being the current limit. That's a maximum stripe width of 14d+3p, but as noted, dRAID can use this even though 17 doesn't "fit into" 45 drives nicely. Cool feature. The recommendation is also to have at least 1 spare per 24 drives, and the ideal array size is up to 64 drives.

So, if and when dRAID is a reliable option on TrueNAS Scale, it seems the ideal config would be 14d+3p, expanded by any number of drives from 1 to 12, with at least 2-3 spares. Also, while the resilvering is fast, the copy-back to a replacement drive that restores full redundancy is in fact limited by the speed of a single disk.
 
Last edited:

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@LuxTerra - dRAID spare disks are actively in use. It's just that the spare capacity is reserved across all disks. So dRAID spares can't be shared between dRAID vDevs. Each dRAID vDev would have its own parity disks and spare disk(s).

I don't know about dRAID expansion... other than adding a new dRAID vDev.
 

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
@LuxTerra - dRAID spare disks are actively in use. It's just that the spare capacity is reserved across all disks. So dRAID spares can't be shared between dRAID vDevs. Each dRAID vDev would have its own parity disks and spare disk(s).

I don't know about dRAID expansion... other than adding a new dRAID vDev.
Correct, but expansion is not creating a new vdev, if I understand correctly. That's why I mentioned that I think I was still thinking of dRAID as merely a diagonal-parity RAIDZ3. For example, the author I linked previously (https://barrywhytestorage.blog/2020/03/25/draid-expansion-the-important-details/) answers this question in the comments section (paraphrasing here):

A 16-disk array with a stripe width of 8 disks (if I understand, it could be 7d+1p, or 6d+2p, or 5d+2p+1s, etc.) can be expanded by a single disk, to a total of 17 disks, and it reflows the data just fine. This is obviously something that RAIDZ1/2/3 cannot do. Of course, this assumes the author is correct and that I haven't misunderstood anything.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@LuxTerra - Thank you for the link. I was not aware of the dRAID expansion. It seems quite well thought out.

Edit:
Looking at the 3 articles:

Not one of them mentions ZFS. I am guessing that this is a separate product (but similar in nature) to the proposed dRAID for ZFS. Perhaps even the original that dRAID for ZFS is based on. From the articles, it appears to be some SAN-attached storage using dRAID (but not ZFS) as the underlying data protection method.

So, any thought of dRAID expansion has to examine the ZFS version of dRAID.
 
Last edited:

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Not one of them mentions ZFS. I am guessing that this is a separate product (but similar in nature) to the proposed dRAID for ZFS. Perhaps even the original that dRAID for ZFS is based on. From the articles, it appears to be some SAN-attached storage using dRAID (but not ZFS) as the underlying data protection method.

So, any thought of dRAID expansion has to examine the ZFS version of dRAID.

Indeed, those 3 articles are for DRAID on the IBM virtual storage platform, which IBM calls Spectrum Virtualize, and is a completely different technology from ZFS. This is just a case of acronym collision, which may have led to the confusion. Note, for ZFS, the proper term is dRAID (note the lower-case d).
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
This. dRAID on ZFS and whatever IBM does are different beasts. Afaik you can’t expand ZFS dRAID. That capability may come in future, but doesn’t exist now.
 

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
Indeed, those 3 articles are for DRAID on the IBM virtual storage platform, which IBM calls Spectrum Virtualize, and is a completely different technology from ZFS. This is just a case of acronym collision, which may have led to the confusion. Note, for ZFS, the proper term is dRAID (note the lower-case d).
Ah, yes! That makes sense. I was indeed confused and thought they were the same thing.
 

LuxTerra

Dabbler
Joined
Dec 11, 2016
Messages
17
This. dRAID on ZFS and whatever IBM does are different beasts. Afaik you can’t expand ZFS dRAID. That capability may come in future, but doesn’t exist now.
So, under ZFS dRAID I'd need a full 45 drives for my chassis and would fix the geometry at something like dRAID3:11d:1s:45c. Or I could use a traditional arrangement like 7-wide RAIDZ2 plus spares (up to 6 vdevs in my pool and up to 3 spares for a total of 45), which I could build out in sets of 7 drives, spares notwithstanding.
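For my own notes, the incremental RAIDZ2 route would look something like this (placeholder names, untested):

# start with one 7-wide raidz2 vdev and a hot spare
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 spare da7
# later, grow the pool one 7-wide raidz2 vdev at a time
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14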

Thanks.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Yes. What you lose is the rapid rebuild; what you gain is the ability to grow more slowly, both by adding vdevs and by replacing the drives in a vdev with larger-capacity ones, one by one.
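The drive-by-drive growth path is just zpool replace with autoexpand turned on (placeholder pool and device names):

# let the pool grow automatically once every disk in a vdev has been upsized
zpool set autoexpand=on tank
# swap one drive at a time for a larger one; wait for each resilver before the next
zpool replace tank da3 da45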
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Am I wrong in believing you can use dRAID now, but that it requires setup via the CLI only? This article says it "requires more testing before it's integrated in to the Web / GUI interface," and it goes on to imply that the current TrueNAS iterations already use OpenZFS 2.1 ... and that OpenZFS 3.0 WILL feature expandable vdevs in the first half of 2022?

www.truenas.com/community/threads/openzfs-3-0-introduced-at-developer-summit.96943/


I'm still trying to determine what generation CPU / motherboard combination will support 12 NVMe devices (enough PCIe lanes) with adequate per-thread performance to saturate an SFP28 NIC (with my R730xd), and, if 12 NVMe is impractical, then perhaps a configuration of

- 4 NVMe
- 1 Radian RMS-200

as ZIL & metadata 'Fusion Pool' vdevs supporting either a 7200 rpm 2.5" or 3.5" data vdev.
But I know better than to assume things are as they'd intuitively (or just hopefully) seem. And there are no benchmarked examples within my budget of a few special vdevs (Fusion Pool) showing the minimum hardware specs needed to saturate SFP28. Doing so (at least under FreeBSD) required both a Xeon CPU with adequate PCIe lanes and the per-thread clock speed to avoid being a bottleneck, which apparently also varies substantially between FreeBSD (TN Core) and Linux (TN Scale).

Such a thread or info would be helpful; please LMK if one exists. Thanks.
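For reference, the sort of fusion-pool layout I have in mind is roughly this (placeholder device names, not a tested or properly sized config):

# spinning-disk data vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6
# mirrored special (metadata) vdev on two NVMe devices
zpool add tank special mirror nvd0 nvd1
# SLOG for sync writes, e.g. the RMS-200, assuming it shows up as an NVMe block device
zpool add tank log nvd2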
 