This is one of those cases where you can just blindly enable the draid feature flag, because it won’t actually be active until you have a DRAID vdev, which you probably won’t.
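(For reference, enabling the flag is a one-liner; 'tank' below is a placeholder pool name. The feature will show as "enabled" rather than "active" until a dRAID vdev actually exists in the pool.)

  # check whether the pool already knows about the draid feature
  zpool get feature@draid tank
  # enable just this feature; 'zpool upgrade tank' would enable all supported features instead
  zpool set feature@draid=enabled tank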
And Eric, I'm asking you because you're a smart and knowledgeable guy. I'm sure you have a reason driving your decision, because I've been reading your answers for years and you have reliably good logic. You're just not always in the mood to reveal your logic with an explanation, and instead report your conclusion.
Knowing any 'cons' would make it easier to get over my fascination.
Are IOPs slower?
Are they less reliable in some way?
Incompatible with Special vDevs..?
Why like it? Rather, why wouldn't everyone like & use it?
(To the extinction of RAIDz?)
But devs worked on this feature for years, only for it to be relevant to a minuscule % of users who have 90 (or 70, or 50) HDs? Really?
How do you come to the conclusion that only a fraction of ZFS users has that number of HDDs?
In addition, for the context of this topic, we need to distinguish between paying enterprise customers and hobbyists, who are basically free riders here.
All in all, 90 disks is not really large for an enterprise deployment. Of course, that's no longer playground territory, but it's nothing exceptional either. You can easily find cases that put this many disks into 4U, and with 60-disk 4U cases you end up with 600 HDDs in a full-size rack.
Would you have guessed that US citizens own an average number of cars below one?
Because dRAID doesn't make sense for a single vdev. dRAID is useful for many vdevs—typically enterprise setups with 50+ drives on a big SAS backplane. Home users are unlikely to have enough drives for dRAID.
Which is fine. Most home users do not need a L2ARC or a SLOG either. And ZFS gets a lot easier once one understands that its many features operate on a "need-to-know" basis.
I mean, people paid developers to implement dRAID. If that's not a statement to the effect of "we really want to use this", I don't know what is.
I probably would have, particularly if you consider that many citizens are under driving age, and in larger cities, it's fairly common for a family to not have a car at all. But WTF does that have to do with anything?
Your incredulity aside, the fact remains that ZFS was developed for enterprise use, so it shouldn't be surprising that at least some of the features being developed (and recently released) are going to be applicable to applications with large drive counts. But if you want to be the guinea pig with your data, and you're fine managing it at the CLI, there's nothing stopping you.
I don't recommend dRAID, but I also don't recommend against it. To be honest, I'm not 100% on some of the details I'd need to understand in order to recommend it.
Allow me to clarify my original meaning:
dRAID is not the sort of thing you'd just throw in with older vdevs. It just doesn't make sense, other than for testing edge cases. You'd use it as a replacement for (multiple) RAIDZ vdevs.
IOPS should be comparable to similar RAIDZ setups, taking into account that a single dRAID vdev can take the place of multiple RAIDZ vdevs. Conceptually anyway. Reliability should also be on par with RAIDZ, apart from bugs and the like. It should also have zero impact on special vdevs.
The primary compromise in dRAID's design versus RAIDZ is that it deals poorly - worse than RAIDZ - with small blocks. Realistic expectations are something that will have to be built up as it gets more use.
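To make the "replacement for multiple RAIDZ vdevs" idea concrete, here is a minimal, hypothetical sketch; the pool name, device paths, and dRAID parameters are placeholders chosen only for illustration, not a recommendation:

  # 22 disks as a single dRAID vdev: double parity, 8 data disks per
  # redundancy group, 22 children, 2 distributed spares
  zpool create tank draid2:8d:22c:2s /dev/sd{a..v}

  # for contrast, the same 22 disks laid out the traditional way:
  # two RAIDZ2 vdevs plus two conventional hot spares
  zpool create tank raidz2 /dev/sd{a..j} raidz2 /dev/sd{k..t} spare /dev/sdu /dev/sdv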
Given that ZFS is designed for the enterprise market, why does this surprise you? But I'm wondering what "decision" of Eric's you're referring to; the post you quote is simply stating that it's unlikely @longtom will have a dRAID vdev.
If you're reading in Eric's post a recommendation against using dRAID, I don't see it, but I'd think a perfectly valid reason for such a recommendation would be "TrueNAS doesn't yet support it."
Politely, this is the same issue which prompted my question. Since iXsystems does not even describe dRAID in its documentation at this stage, I'll make do with...
This says "what" but doesn't describe "why." I've yet to see a reason.
Do you believe things in the absence of reasons?
Can you even support your own statement with evidence or public statements..?
(I.e., from iXsystems?)
dRAID offers a solution for large arrays; vdevs with fewer than 20 spindles will see limited benefit from the new option. Performance and resilver results will be similar to RAIDZ for small numbers of spindles.
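Much of the resilver story with dRAID comes down to its distributed spares. As a rough, hypothetical sketch of how a failed disk would be handled (pool, disk, and spare names are assumptions; the draid2-0-0 style spare name follows what zpool status reports for the vdev):

  # rebuild the failed disk onto the vdev's first distributed spare
  zpool replace tank sdq draid2-0-0
  # watch the sequential rebuild progress
  zpool status tank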