dRAID's here (for ZoL) ... but when will it arrive on TrueNAS? And, re: Special vDevs...

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Well, dRAID (distributed RAID) is here (in ZoL 2.0.0 and newer) ... and I've heard it'll be in TrueNAS soon, too.

Anyone have any idea how soon it might hit a TrueNAS SCALE update?
If it's less than a month away, I'll postpone creating my pool so I can format it as dRAID.

(Think it'll eventually make its way down to TrueNAS CORE, too?)

I really CANNOT WAIT to try out Special Allocation Classes as well.
I just wish they'd back up the special allocation classes to the main pool, because Optane is a LOT more expensive per unit of parity than spinning drives! lol.
And data recovery on drives that aren't helium-filled is reasonable. In fact, it's pretty inexpensive and likely to succeed if it's not a dropped drive or bad heads.

But recovering flash!?? TRUST ME, that is a different story. You have to be able to disable the garbage-collection routine and TRIM (otherwise, while it's powered on, it'll just keep corrupting data), so the drive HAS to go into technology mode, and you're talking THOUSANDS (if not over $13k) for a good SSD recovery machine (which I'll hopefully get soon). But EVEN THEN, recovery is still lower-likelihood than on spinning drives.

Thus, I wish there were a way to make the pool back up the Special vDevs to the main array, when the array's not otherwise "busy," to mitigate how bad a loss could be ...
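
In the meantime, the closest mitigation I know of today is simply widening the special vDev's mirror, so one dead flash device isn't fatal. A minimal sketch, assuming a pool named tank and placeholder NVMe device names:

Code:
# Add the special allocation class as a three-way mirror (device names are examples):
> zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
# Confirm the special vdev shows up:
> zpool status tank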
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
dRAID will be more than 6 months away... in the meantime, people can try it out via the CLI.
Special allocation classes with mirrored SSDs will be reliable... just make sure they are power-safe. No reports of problems so far.
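
For anyone experimenting from the CLI, a minimal sketch (assuming eleven scratch disks sda through sdk; the layout string is parity level, data disks per redundancy group, total children, and distributed spares, per the zpoolconcepts man page):

Code:
# dRAID2 with 4-wide data groups, 11 children, and 1 distributed spare:
> zpool create tank draid2:4d:11c:1s sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk
> zpool status tank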
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The release you list, 2.0.0, does not support dRAID. Per the official OpenZFS document below, dRAID is supported in 2.1.1:

OpenZFS - Feature Flags

I verified that with my Linux desktop (not running TrueNAS CORE or SCALE):

Code:
> uname -a
Linux arwen 5.4.109-gentoo.1 #1 SMP PREEMPT Sun Apr 18 11:24:01 EDT 2021 x86_64 AMD Ryzen 5 2400G with Radeon Vega Graphics AuthenticAMD GNU/Linux

> zpool --version
zfs-2.0.5-r1-gentoo
zfs-kmod-2.0.5-r0-gentoo

> zpool get all | grep -i raid
>
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
The release you list, 2.0.0, does not support dRAID. Per the official OpenZFS document below, dRAID is supported in 2.1.1:

OpenZFS - Feature Flags

I verified that with my Linux desktop (not running TrueNAS CORE or SCALE):

Code:
> uname -a
Linux arwen 5.4.109-gentoo.1 #1 SMP PREEMPT Sun Apr 18 11:24:01 EDT 2021 x86_64 AMD Ryzen 5 2400G with Radeon Vega Graphics AuthenticAMD GNU/Linux

> zpool --version
zfs-2.0.5-r1-gentoo
zfs-kmod-2.0.5-r0-gentoo

> zpool get all | grep -i raid
>


Okay, good predictions. I think my version should at least provide limited support for it.
I'm running TrueNAS (CORE) 13.0-U2.

I'm guessing it's only available in the CLI,
and that it isn't "blessed" as reliable by iX until it's in the GUI ...
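
A quick way to check from the shell whether the installed OpenZFS build even knows about dRAID (a sketch; zpool upgrade -v lists every feature flag the binary supports, and the pool name below is a placeholder):

Code:
# A "draid" line here means the installed OpenZFS supports the feature:
> zpool upgrade -v | grep -i draid
# Per pool, check whether the feature is enabled/active:
> zpool get feature@draid tank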

But the real question ... is this really only something for big arrays?

Since dRAID only requires reserving space (rather than dedicating whole drives as idle spares), my intuition says I'd get a greater percentage of my hardware's IOPS and bandwidth for data (and, of course, faster resilver times). If the reliability is on par with RAIDz, then once it's ready enough for home labs ... why wouldn't everyone use this?

Thanks! (truly)

(Attached screenshot: TrueNAS OpenZFS ver.png)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Just because ZFS internally supports dRAID does not mean you want to use it on TrueNAS CORE or SCALE. In fact, doing so might make the NAS server less reliable than a straight-up instance of FreeBSD or Linux with ZFS & dRAID. Remember, until the GUI supports dRAID, it's likely that all pool- and disk-related web pages & functions will not work (or will work wonky). So the disk-failure e-mails may never come, even if you have 3 failed disks over months.

The faster resilver times are only for the initial rebuild onto the distributed spare after a disk failure. On replacement of the failed disk, it's just as slow as a regular RAID-Zx. Plus, I don't know if dRAID supports replace-in-place like Mirror & RAID-Zx. (And for that matter, even a simple single-disk vDev can be replaced in place.)


As for why everyone would not be using dRAID, it's simple: RAID-Zx has different features, like using a smaller vDev size. That would allow replacing all the disks to grow that vDev, or adding another smaller vDev to get both more space and more IOPS. Using dRAID would likely require more disks & integrated spares to be useful.
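
For reference, that grow-by-replacement path on a RAID-Zx vDev looks roughly like this (a sketch, assuming a pool named tank and placeholder device names):

Code:
# Let the vDev expand automatically once every member disk is larger:
> zpool set autoexpand=on tank
# Replace one disk at a time, letting each resilver finish before the next:
> zpool replace tank sda sdx
> zpool status tank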
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Just because ZFS internally supports dRAID does not mean you want to use it on TrueNAS CORE or SCALE. In fact, doing so might make the NAS server less reliable than a straight-up instance of FreeBSD or Linux with ZFS & dRAID. Remember, until the GUI supports dRAID, it's likely that all pool- and disk-related web pages & functions will not work (or will work wonky). So the disk-failure e-mails may never come, even if you have 3 failed disks over months.

The faster resilver times are only for the initial rebuild onto the distributed spare after a disk failure. On replacement of the failed disk, it's just as slow as a regular RAID-Zx. Plus, I don't know if dRAID supports replace-in-place like Mirror & RAID-Zx. (And for that matter, even a simple single-disk vDev can be replaced in place.)


As for why everyone would not be using dRAID, it's simple: RAID-Zx has different features, like using a smaller vDev size. That would allow replacing all the disks to grow that vDev, or adding another smaller vDev to get both more space and more IOPS. Using dRAID would likely require more disks & integrated spares to be useful.

Of course I know & agree (it's mentioned in the response you replied to) that no feature is officially supported until it's in the GUI.

On another thread discussing dRAID, I asked about the possible pros & cons in more depth:

- Does it support special vDevs?
- What's the performance of dRAID vs. a striped set of equal drives?
(A striped set seems the best way to quantify theoretical max performance against parity's overhead; see the benchmark sketch after this list.)
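
For anyone who wants to measure that second point directly, a rough benchmark sketch (assuming six scratch disks sdb through sdg that can be destroyed, and fio installed; pool names and fio parameters are arbitrary examples):

Code:
# Striped baseline (no redundancy):
> zpool create scratch0 sdb sdc sdd sde sdf sdg
> fio --name=seqw --directory=/scratch0 --rw=write --bs=1M --size=4G --numjobs=4 --group_reporting
> zpool destroy scratch0
# Same disks as a single-parity dRAID with one distributed spare:
> zpool create scratch1 draid1:4d:6c:1s sdb sdc sdd sde sdf sdg
> fio --name=seqw --directory=/scratch1 --rw=write --bs=1M --size=4G --numjobs=4 --group_reporting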


Our dRAID opinions reflect our optimism or pessimism (realism) toward new apps / OSes / features: I assume it would of course have all the features of RAIDz, while others are more skeptical and realistic, and know better than to count on any feature being on par with RAIDz until they've tested it and seen that it's reliable.

But if all it did was maintain all the features of RAIDz and offer faster rebuild times, then to me it would be completely superior.

Regarding reliability: since it's supposedly intended for "very large volumes of 90+ HDs," it seems it HAS to be reliable, because if it's not reliable at 90+ HDs it's worthless; and if it is, the developers would seem to be proudly demonstrating that reliability by saying it supports such large volumes.

Ultimately, only once it's truly tested in vivo, and after performance, reliability, and any and all weaknesses have been identified ... should any of us believe we know what dRAID means.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Just because ZFS internally supports dRAID does not mean you want to use it on TrueNAS CORE or SCALE. In fact, doing so might make the NAS server less reliable than a straight-up instance of FreeBSD or Linux with ZFS & dRAID. Remember, until the GUI supports dRAID, it's likely that all pool- and disk-related web pages & functions will not work (or will work wonky). So the disk-failure e-mails may never come, even if you have 3 failed disks over months.

The faster resilver times are only for the initial rebuild onto the distributed spare after a disk failure. On replacement of the failed disk, it's just as slow as a regular RAID-Zx. Plus, I don't know if dRAID supports replace-in-place like Mirror & RAID-Zx. (And for that matter, even a simple single-disk vDev can be replaced in place.)


As for why everyone would not be using dRAID, it's simple: RAID-Zx has different features, like using a smaller vDev size. That would allow replacing all the disks to grow that vDev, or adding another smaller vDev to get both more space and more IOPS. Using dRAID would likely require more disks & integrated spares to be useful.

Agreed (as I mentioned earlier):
The absence of a major feature from the GUI is indicative of whether it's officially supported.

As I was discussing dRAID in another TrueNAS thread and received a good, clear answer, I'm going to provide a summary of my questions and the answer below:


Being "geared for 90+ HD arrays" suggests it's reliable.
If it retains all RAIDz features (special vDevs, nested Devs, etc) of RAIDz.
And rebuilds failed devices (degraded arrays) substantially faster...

With known "pros" & absent articulated "cons," it ceases to be "why like it" & becomes:
Why wouldn't everyone like and use dRAID – to the extinction of RAIDz..?

Granted, nothing can be "known" without testing it.
Only then can we 'know' performance & reliability, and identify and remedy all bugs.


The only answer that's satisfied my curiosity to the extent I think possible (short of that of a developer, or someone using it on ZoL) was provided by Ericloewe in another TrueNAS Forums thread, which I'm summarizing here ... my curiosity is satisfied until dRAID is actually out, when opinions predicated on actual experience can allow more elaborate answers than those that're practical to seek now, which are relegated to theory.



Summary of Ericloewe's answer (available here: https://bit.ly/3RH3u8B):

dRAID is not the sort of thing you'd mix with older vDevs;
mixing them only makes sense for testing edge cases.

You'd use it to replace (multiple) RAIDz vDevs.

IOPS should be comparable to similar RAIDz setups, taking into account that a single dRAID vDev can take the place of multiple RAIDz vDevs (conceptually at least).

Reliability should also be on par with RAIDz (excluding bugs, etc.).
And it should have no impact on Special vDevs.

The primary compromise in dRAID's design vs. RAIDz is reduced small-block performance. Our expectations & confidence in its reliability will scale with real use over time.
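
Worth noting alongside that small-block caveat: small blocks are exactly what a Special vDev can absorb. A sketch, assuming a pool named tank that already has a special vDev attached (the 16K cutoff is an arbitrary example and must be smaller than the dataset's recordsize):

Code:
# Route metadata plus any data block of 16K or smaller to the special vdev:
> zfs set special_small_blocks=16K tank/mydataset
> zfs get special_small_blocks tank/mydataset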
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
I too would like to see what comes with dRAID. I've had production systems running Dynamic Disk Pools on SANtricity for a long time. It's a really nice and easy-to-use feature. Having to manage your scalability by adding vDevs can be cumbersome.

While ZFS dRAID is not exactly the same, and has its own limitations due to how it works in comparison with the above, it's certainly a fantastic step in the right direction, and I am excited to see more and do some testing of my own.
 