Guidance on Media storage setup

Status
Not open for further replies.

bigzaj

Explorer
Joined
Jan 6, 2016
Messages
95
I am testing a new server using an LSI 9211-8i flashed to IT mode in a 24-bay Supermicro case. It replaces a WHS server with 2x 10-disk RAID 6 arrays.

Planned setup:
1x volume -> 2x RAIDZ2 vdevs (8x6TB + 8x3TB) -> 1x dataset called "Media" (I assume 8 drives per RAIDZ2 vdev is OK?)
In this dataset I will put movies, tv shows, etc.

Questions:
1. With 2 RaidZ2 vdevs in 1 volume, will it stripe across both vdevs? If so does this mean that they need to be same capacity?
2. I get that I have 2 drive fault tolerance per vdev, but if I drop 3 drives on 1 vdev I will lose the entire volume correct? I'm confused about the benefit of two vdevs in one volume.
3. Can I increase capacity of the 8x3tb by replacing drives one by one?
4. Is there a risk of extending with a 3rd vdev on the same volume in the future? Goal is to have 100TB.
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
1. With 2 RaidZ2 vdevs in 1 volume, will it stripe across both vdevs? If so does this mean that they need to be same capacity?
Yes, writes are striped across both vdevs. No, the vdevs do not need to be the same capacity.

2. I get that I have 2 drive fault tolerance per vdev, but if I drop 3 drives on 1 vdev I will lose the entire volume correct? I'm confused about the benefit of two vdevs in one volume.
Yes, losing three drives in one vdev loses the entire pool. The benefit of multiple vdevs is higher IOPS, since ZFS stripes across them.

3. Can I increase capacity of the 8x3tb by replacing drives one by one?
Capacity won't increase in the vdev until you replace all drives in the vdev with larger ones. The smallest drive in the vdev dictates the capacity of the vdev.
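A minimal Python sketch of that rule (hypothetical drive sizes, raw TB, ignoring ZFS overhead):

```python
# Sketch of the "smallest drive dictates vdev capacity" rule.
# Drive sizes are hypothetical raw TB, not tied to any real pool.

def raidz_vdev_capacity(drive_sizes_tb, parity):
    """Usable capacity of a RAIDZ vdev: every drive counts as if it
    were the smallest, and `parity` drives' worth goes to redundancy."""
    n = len(drive_sizes_tb)
    return (n - parity) * min(drive_sizes_tb)

# 8x3TB RAIDZ2 vdev, mid-upgrade: 5 drives already swapped for 6TB.
mid_upgrade = [6, 6, 6, 6, 6, 3, 3, 3]
print(raidz_vdev_capacity(mid_upgrade, parity=2))   # still 18 (TB)

# Only once the last 3TB drive is replaced does capacity jump.
done = [6] * 8
print(raidz_vdev_capacity(done, parity=2))          # 36 (TB)
```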

4. Is there a risk of extending with a 3rd vdev on the same volume in the future? Goal is to have 100TB.
Nope.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Just further commentary on point #2, regarding the vdev sizes/number.

The most you can do is RAID-Z3 in ZFS. You want to aim for roughly 30% of your drives being redundant: below 25% is definitely too low, and 40% or higher is probably awfully high, unless you have a reason for it. In other words, a RAID-Z with 5 drives would offer only 20% redundancy, and thus is too "wide" of a vdev for RAID-Z; with RAID-Z2, the same 5 drives would be at 40% redundancy, which is kind of high. And so on.

Thus, if you were, for some reason, wanting to put 15+ drives in a single (presumably RAID-Z3) vdev, you'd have less than 20% redundancy (3 drives out of 15 or more), which is too wide of a vdev and does not offer what we would consider to be acceptable resilience. So when you start talking about 15, 20, 25 drives, there's just no way in ZFS to reasonably plunk those into a single vdev anyway, in most any case.

So, as the other gentleman says, there is some effect on IOPS, but when you start talking about 15 or 30 drives, a single vdev for them is already a bad idea just on the redundancy equation. There is no reasonable configuration for your number of drives (even if they were the same capacity throughout) in a single vdev. 12 drives, I think, is a reasonable maximum for any vdev.
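The percentages above are just parity drives divided by total drives; a quick sketch:

```python
# Redundancy fraction (parity drives / total drives) for the
# configurations discussed above.
def redundancy(n_drives, parity):
    return parity / n_drives

print(f"{redundancy(5, 1):.0%}")   # 5-wide RAID-Z1  -> 20% (too low)
print(f"{redundancy(5, 2):.0%}")   # 5-wide RAID-Z2  -> 40% (rather high)
print(f"{redundancy(15, 3):.0%}")  # 15-wide RAID-Z3 -> 20% (too wide)
print(f"{redundancy(8, 2):.0%}")   # 8-wide RAID-Z2  -> 25%
```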
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
@DrKK I might be mistaken, but wasn't the OP asking about adding additional vdevs to a pool, and whether there is a limit to how many vdevs can be striped into a pool?

Also, out of my personal curiosity, do you have any articles or discussion (on here, even) that explain in greater detail why 30% is the "sweet spot"? Is this a personal opinion of yours, or more widely recommended? Thanks!
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I was responding to "I am confused about the benefit of having two vdevs in a single pool". I was pointing out that multiple vdevs are sometimes "necessary" just on account of the number of devices in the pool.

As to your latter question, I don't know if it's a "personal opinion", really. It's just simply the collected wisdom. A "good" size for RAID-Z is 3 or 4 disks. A "good" size for RAID-Z2 is 7 or 8 disks. A "good" size for RAID-Z3 is 10 or 12 disks. At least, that's what we all do, and that's what iXsystems tends to do when structuring commercial products, as far as I know. I just turned that into a mathematical rule of thumb.
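A sketch of that rule of thumb, assuming a redundancy floor of about 25% (my reading of the guidance above; the 0.25 cutoff is an assumption, not a ZFS constant):

```python
# Turn the "good sizes" into a formula: keep parity/width at or above
# a floor of roughly 25%, i.e. maximum width = parity / floor.
def max_width(parity, min_redundancy=0.25):
    return int(parity / min_redundancy)

for z in (1, 2, 3):
    print(f"RAID-Z{z}: up to {max_width(z)} drives wide")
# -> 4, 8, and 12 drives respectively, matching the sizes quoted above.
```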
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
I was responding to "I am confused about the benefit of having two vdevs in a single pool". I was pointing out that multiple vdevs are sometimes "necessary" just on account of the number of devices in the pool.

Gotcha.

Thanks for the response regarding my other question. I didn't know if you were basing your assertion on what I previously thought was true about the different RAIDZ levels "preferring" (poor choice of word) certain vdev sizes, i.e. that for RAIDZ2, 8 drives were better than 7. I now know this is outdated information from FreeNAS prior to 9.2 (I might not be remembering that version number correctly).

I am also aware that vdevs larger than 12 (or 11?) are not advised, and I assume that is for a different technical reason than the one you are referencing. What you said makes sense; I was just intrigued about the information it is based upon.

Thanks!
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Gotcha.

Thanks for the response regarding my other question. I didn't know if you were basing your assertion on what I previously thought was true about the different RAIDZ levels "preferring" (poor choice of word) certain vdev sizes, i.e. that for RAIDZ2, 8 drives were better than 7. I now know this is outdated information from FreeNAS prior to 9.2 (I might not be remembering that version number correctly).

I am also aware that vdevs larger than 12 (or 11?) are not advised, and I assume that is for a different technical reason than the one you are referencing. What you said makes sense; I was just intrigued about the information it is based upon.

Thanks!
There are many reasons an ultra-wide vdev is not recommended. One of the main ones is that it does not offer enough redundancy, which is precisely what I was saying.

As for the 8 vs. 7 thing, there, you're referring to a certain idiosyncratic sweet spot that the various RAID-Zx configurations have, in terms of performance and wasted overhead. That's another issue entirely. I was not commenting on that, as that is well discussed in any primer for ZFS.
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
There are many reasons an ultra-wide vdev is not recommended. One of the main ones is that it does not offer enough redundancy, which is precisely what I was saying.

I understand; I just didn't know that vdevs larger than 12 were not recommended purely for the low redundancy percentage (as you stated) that they provide.

I think I found the article I was referring to about "optimal" vdev configurations based on the raidz level:
https://www.delphix.com/blog/delphi...or-how-i-learned-stop-worrying-and-love-raidz

Basically it came down to a misconception about block size: people recommended vdev sizes based on an incorrect assumption of using (2^n)+p drives (where p is the number of parity disks). People assumed that block sizes were a power of 2, but with standard compression (lz4) block sizes come out to non-power-of-2 values like 3.2KB and such. That eliminates the recommendation that a 7-disk vdev would be less efficient than an 8-disk vdev for RAIDZ2.
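For the curious, here is a rough Python sketch of the allocation arithmetic that article describes, assuming 4KiB sectors (ashift=12); the disk counts and block sizes are hypothetical, and real ZFS has further details this glosses over:

```python
import math

# Rough sketch of RAIDZ on-disk allocation per the linked Delphix post:
# one row of parity sectors per row of data sectors, then the whole
# allocation is rounded up to a multiple of (parity + 1) sectors.
def raidz_allocated_sectors(block_bytes, data_disks, parity, sector=4096):
    data = math.ceil(block_bytes / sector)              # data sectors
    parity_secs = math.ceil(data / data_disks) * parity # parity sectors
    total = data + parity_secs
    return math.ceil(total / (parity + 1)) * (parity + 1)

# A full 128KiB record on 6 data disks + 2 parity (8-wide RAIDZ2):
print(raidz_allocated_sectors(128 * 1024, 6, 2))   # 45 sectors
# The same record lz4-compressed to ~83KiB -- not a power of two:
print(raidz_allocated_sectors(83 * 1024, 6, 2))    # 30 sectors
```

Once compression makes block sizes arbitrary like this, the (2^n)+p divisibility argument stops mattering.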

I think I got off on a bit of a tangent there and read too much into what you were saying. My bad! :)

(sorry for hijacking)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
In relation to benefits of multiple vdevs:

Yes. Higher IOPS.

Also, multiple vdevs make it easier to expand your storage one vdev at a time.

Say, hypothetically, you had a 12-disk RAIDZ3 vdev of 3TB drives and another of 4TB drives. You would have to replace all 12 disks in the 3TB set before you see any capacity benefit.

BUT if you had gone with 8-way vdevs, then you'd see a benefit after replacing just 8.
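To put numbers on that (hypothetical 24-drive layouts, raw TB, ignoring ZFS overhead; for simplicity both pools start as all-3TB, which differs slightly from the 3TB/4TB mix above):

```python
# Compare two 24-drive pool layouts after buying eight 6TB replacements.
def usable_tb(vdevs):
    # Each vdev is (drive_sizes, parity); the smallest drive sets the size.
    return sum((len(d) - p) * min(d) for d, p in vdevs)

# Two 12-wide RAID-Z3 vdevs of 3TB drives; 8 drives replaced so far:
wide = [([6] * 8 + [3] * 4, 3), ([3] * 12, 3)]
# Three 8-wide RAID-Z2 vdevs of 3TB drives; one vdev fully replaced:
narrow = [([6] * 8, 2), ([3] * 8, 2), ([3] * 8, 2)]

print(usable_tb(wide))    # 54 -- no gain yet; the 3TB stragglers cap it
print(usable_tb(narrow))  # 72 -- the upgraded vdev already counts in full
```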

For a 24-bay chassis, I'd recommend 8-way Z2 vdevs if space is more important, and 6-way Z2s if IOPS is more important.

Unless you're planning on NFS/iSCSI SAN setups, in which case you should be looking at mirrors.
 