12 disk vdev

Joined
May 25, 2017
Messages
4
I have a backup server with 24 3TB 7200RPM disks. Its only job is to store ZFS snapshots from the main FreeNAS server, so I'd like to configure it as two 12-drive RAIDZ2 vdevs to maximize space. Since it will only have one client, IOPS don't seem to be an issue (correct me if I'm wrong).

I know the rule of thumb is 10 drives max. per Z2 vdev, but is anybody clear on why? I don't want to risk instability, but if the only issue is speed then it's worth a try.

Other hardware:
Supermicro SC846 chassis with direct-attached backplane
3x LSI 9207-8i HBA
128GB ECC RAM
2x Xeon E5-2609
Chelsio 10Gb NIC

Thanks.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
I know the rule of thumb is 10 drives max. per Z2 vdev, but is anybody clear on why?
RAIDZ vdevs offer roughly the same IOPS as a single disk. As the vdev gets wider, this becomes a very large constraint.
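Back-of-the-envelope, with an assumed ~150 random IOPS for a typical 7200 RPM disk (illustrative only, not a benchmark of your drives), the pool's random IOPS scale with the number of vdevs, not the number of disks:

Python:
# Assumed ~150 random IOPS for a typical 7200 RPM disk; illustrative only.
DISK_IOPS = 150

def pool_random_iops(num_vdevs, iops_per_disk=DISK_IOPS):
    # Each RAIDZ vdev behaves roughly like one disk for random I/O.
    return num_vdevs * iops_per_disk

print(pool_random_iops(2))  # 2 x 12-wide RAIDZ2 -> ~300 IOPS from 24 disks
print(pool_random_iops(4))  # 4 x 6-wide RAIDZ2  -> ~600 IOPS from the same 24 disks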
 
Joined
May 25, 2017
Messages
4
Thanks guys.

So if I'm understanding correctly, the only reason to avoid wide RAIDZx vdevs is low IOPS. Which makes sense, especially if you're used to hardware RAID6, where more spindles = more IOPS. But if a 20-disk RAIDZ2 has the same IOPS as a single disk, and a 6-drive RAIDZ2 has the same IOPS as a single disk, does that mean the arrays are functionally the same speed, or is there some IOPS overhead that gets worse the more disks you have?

In other words, if the IOPS available to the client are the same for a small and a large RAIDZ2 vdev, the only penalty for going larger is psychological: we expect a 20-disk array to be fast.

But that leaves the door open for rarer use cases like cold storage, where the extra IOPS of multiple vdevs might not be useful at all, but the extra storage space of extra-wide (12-plus-disk) vdevs might be very useful indeed.

Obviously a 60-disk RAIDZ3 would be asking for trouble data-to-parity-wise, but I wonder what a reasonable maximum width for "slow" vdevs might be.
 

Vito Reiter

Wise in the Ways of Science
Joined
Jan 18, 2017
Messages
232
Using more than 12 disks per vdev is not recommended. The recommended number of disks per vdev is between 3 and 9. If you have more disks, use multiple vdevs.

I know it doesn't say it causes instability, but I'm pretty certain I've read elsewhere that 11 is the ceiling and anything above that (12+) would be unstable. Not 100% sure, just my input.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I have a backup server with 24 3TB 7200RPM disks. Its only job is to store ZFS snapshots from the main FreeNAS server, so I'd like to configure it as two 12-drive RAIDZ2 vdevs to maximize space. Since it will only have one client, IOPS don't seem to be an issue (correct me if I'm wrong).

I know the rule of thumb is 10 drives max. per Z2 vdev, but is anybody clear on why? I don't want to risk instability, but if the only issue is speed then it's worth a try.

Other hardware:
Supermicro SC846 chassis with direct-attached backplane
3x LSI 9207-8i HBA
128GB ECC RAM
2x Xeon E5-2609
Chelsio 10Gb NIC

Thanks.
I would be comfortable using 2 x 12-disk RAIDZ2 vdevs in your use case. A reasonable alternative would be 3 x 8-disk RAIDZ2 vdevs.

One of the pitfalls of 'wide' vdevs no one has mentioned yet is resilvering: the more drives there are in a vdev, the longer it takes to resilver when/if you have to replace a failed disk.
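As a very rough floor (the disk size, fill level, and sustained write speed below are assumptions for illustration), a resilver can't finish faster than the replacement disk can be rewritten, and on a wide, fragmented vdev the real time is usually much worse because the surviving disks end up servicing lots of small scattered reads:

Python:
def resilver_floor_hours(disk_tb, fraction_full, write_mb_s):
    # Best case: the allocated share of the replacement disk is rewritten
    # sequentially. Wide, fragmented vdevs are typically far slower than this.
    bytes_to_write = disk_tb * 1e12 * fraction_full
    return bytes_to_write / (write_mb_s * 1e6) / 3600

print(f"{resilver_floor_hours(3, 0.80, 120):.1f} h")  # ~5.6 h floor for a 3 TB disk at 80% full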
 
Joined
May 25, 2017
Messages
4
Spearfoot, that is an excellent point. So much for my 60-disk RAIDZ3. I imagine the rebuild time on that would be horrifying.

3 x 8-disk is definitely my fallback position. I'll try out the 2 x 12-disk RAIDZ2, fill it up to 80% and thrash it a bit, try a resilver, and report back.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So let me clear up some confusion. 12+ disk vdevs are totally "okay" from a ZFS coding standpoint. There's nothing unstable about that (with some exceptions, see below).

Obviously, the more disks you have in a vdev, the more prone you are to multiple simultaneous disk failures in a vdev, which translates to a higher chance of losing the entire zpool. This could be classified as "unstable".

Your IOPS will be limited to roughly that of 2 disks, since you have 2 vdevs. The reason for not recommending wide vdevs is performance (IOPS). I understand this is for backup storage only, but even in these cases I don't recommend it, for several reasons:

1. Over time, your zpool will fragment more and more. This is the nature of ZFS and since there's no defrag option, you can't fix this at all. This *will* translate to slower and slower performance of your zpool over time.
2. Over time, your zpool will have to work ever harder to integrate snapshots. I've worked with people who would send a snapshot, and then it would take an hour or more to integrate a snapshot that was just a few hundred GB of data.
3. I've seen people try to do a zfs send back to the main system for data recovery purposes, only to realize that the return trip will take 10+ days and they have no choice but to wait that long for the data to replicate back to the original (production) server.
4. Some of the ways that ZFS is tuned by default have been tested by the FreeBSD community, but virtually nobody tests ultra-wide vdevs like yours. So you can run into other bottlenecks over time, and you're likely to be on your own trying to figure them out (assuming you even can). Tuning ZFS is crazy hard (even I avoid it whenever possible).

So I hear you on "IOPS don't matter to me". If this is for home use and you are okay with simply not having access to your movies for a week or two, go for it. If this is a production server where a 10+ day recovery is totally not an option, then I'd strongly recommend you reconsider vdevs that wide.
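For a sense of scale on that "10+ day" figure (the pool size and effective read rate below are illustrative assumptions, not measurements), replication back to production is gated by how fast the fragmented backup pool can read, and a wide RAIDZ vdev tends to end up IOPS-bound rather than bandwidth-bound:

Python:
def restore_days(data_tib, effective_mb_s):
    # Time to stream data_tib back to production at a sustained effective rate.
    return data_tib * 1024**4 / (effective_mb_s * 1e6) / 86400

print(f"{restore_days(50, 60):.1f} days")   # ~10.6 days at an IOPS-bound ~60 MB/s
print(f"{restore_days(50, 400):.1f} days")  # ~1.6 days if the pool could sustain 400 MB/s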

I've worked with lots of people who say "IOPS don't matter", and 3 months later they're asking why things are slow; when I explain it, they realize that they do actually have a lower threshold. I promise you that if IOPS and fragmentation get bad enough, you will eventually be unhappy with it. ;)

I personally ran a 16-disk vdev for home use just to "see what would happen". It worked great for a few months. No problems at first, and I thought "the man" was holding us back by telling fables about how much wide vdevs suck. But after a few months it was hard to stream a video or even do things like move a directory without bottlenecking. I was the only user, and I had these problems.

So my general rule is "just don't do it". Not to sound like a jerk, but in my experience, when people want to go really wide and think that iops won't matter for their workload, they probably aren't realizing how bad it can get. I can promise you won't be the first to think this will be fine, and you probably won't be the last I'll hear from this month either.

I thought the same thing you do... IOPS don't matter to me. But they can eventually bite you, and once they have, there is no fix except to destroy the zpool and start over (a very time-consuming and generally crappy process). I eventually had to trash that zpool and recreate it because it was totally nonfunctional for me as a single user at home.

tl;dr: I never, ever recommend wide vdevs. ;)

PS: In my experience, if this is something like a backup for a production server, I'd probably go no wider than an 8-disk RAIDZ2 for performance reasons, unless you are totally unconcerned about extremely long data restoration times. ;)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
I'd go for a slightly asymmetrical approach, since the server only has 24 slots. The slight asymmetry won't cause any problems.

12 x disk RAID-Z2
11 x disk RAID-Z2

This leaves a free slot for hot replacement. Meaning, if a failed disk in either vDev is still sort of working, you can install the replacement disk in the free slot and do a less disruptive disk replacement: ZFS will read what it can from the failing disk, and everything else from the rest of the vDev. This helps reduce the impact of disk replacements for wider vDevs.

As I said, this would be my choice.
 
Joined
May 25, 2017
Messages
4
Cyberjock -- Yes! That is exactly the explanation I was hoping for. Thanks for taking the time.

I've been following the "just don't do it" and "just do it" FreeNAS rules for years with good results, but after a while it's nice to know why a rule exists. I looked through many posts that cited the 11-disk limit, but none that went much deeper than "because iops".

This is a backup for a production server, and a 10-day recovery would be a very bad thing. So yeah, 3x8 it is, or maybe even 4x6. Or I might just bite the bullet on a 36-drive chassis and get speed and capacity.

Hopefully this will be useful to the other "why do I need iops?" people out there.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
You should check out this raidz calculator

http://wintelguy.com/raidcalc.pl

As vdevs get wider, a lot of extra space can be consumed by padding and parity. From my tests, a 9-wide vdev is quite efficient, but wider vdevs lose a surprising amount to padding.
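For reference, here's a minimal sketch of the usual accounting those calculators are based on: data sectors, plus a row of parity per stripe, padded up to a multiple of parity+1. The 128K recordsize and ashift=12 are assumptions; compression and metadata shift the real numbers.

Python:
import math

def raidz_usable_fraction(width, parity, recordsize=128 * 1024, ashift=12):
    # Fraction of the allocated sectors that hold data, for one block.
    sector = 1 << ashift
    data = math.ceil(recordsize / sector)                   # data sectors per block
    rows = math.ceil(data / (width - parity))               # stripes needed
    alloc = data + rows * parity                            # data + parity sectors
    alloc = math.ceil(alloc / (parity + 1)) * (parity + 1)  # pad to a multiple of parity+1
    return data / alloc

for w in (6, 9, 10, 12):
    eff = raidz_usable_fraction(w, 2)
    print(f"{w}-wide RAIDZ2: ~{eff:.1%} of allocation is data (ideal {(w - 2) / w:.1%})")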
 

AVB

Contributor
Joined
Apr 29, 2012
Messages
174
Absolutely great information, and great timing. I was thinking about doing a 12-drive vdev for my backup server too, to gain a little extra space, but now that idea is out the window. I might do a total rebuild with two 9-drive Z2 vdevs in one pool, instead of the 12-drive and 6-drive pools I was planning.
 

William Bravin

Contributor
Joined
Mar 16, 2016
Messages
195
Hello all

I only use the FreeNAS server to deliver music and videos to my other PCs. I am, for the most part, the only user. I recently had to reconfigure my FreeNAS server because my ICybox failed. I had 8 x 2 TB disks in a single pool and used about 3.6 TB of data. I regularly copied my data to a Buffalo NAS, which is filling up rapidly, and kept another copy on a 4 TB disk in my HTPC.

I purchased a 2U server with 8 bays (to act as my primary NAS). I will change the drives in my old server to 4 x 3 TB, and then populate the new server with 8 x 2 TB WD Reds.

I do not want to be without my music or videos for 10 days while my NAS is rebuilt.

Should I continue with the 8 drives in a single RAIDZ2 vdev, or will I have more space with two 4-disk RAIDZ1 vdevs? (I have many 2 TB WD Reds.)

Thank you for your suggestions.
 

AVB

Contributor
Joined
Apr 29, 2012
Messages
174
Hello all

I only use the FreeNAS server to deliver music and videos to my other PCs. I am, for the most part, the only user. I recently had to reconfigure my FreeNAS server because my ICybox failed. I had 8 x 2 TB disks in a single pool and used about 3.6 TB of data. I regularly copied my data to a Buffalo NAS, which is filling up rapidly, and kept another copy on a 4 TB disk in my HTPC.

I purchased a 2U server with 8 bays (to act as my primary NAS). I will change the drives in my old server to 4 x 3 TB, and then populate the new server with 8 x 2 TB WD Reds.

I do not want to be without my music or videos for 10 days while my NAS is rebuilt.

Should I continue with the 8 drives in a single RAIDZ2 vdev, or will I have more space with two 4-disk RAIDZ1 vdevs? (I have many 2 TB WD Reds.)

Thank you for your suggestions.
Your usable space of about 10.4 TB will be the same with an 8-drive Z2 pool or with two 4-drive Z1 vdevs in the pool. However, the Z2 pool provides better data security, since you could lose any 2 drives before losing data, versus only 1 drive per vdev in the 2 x 4 Z1 scenario. Have you given any thought to used enterprise drives? 3TB Hitachis that pass all tests can be had for about $30 each; that would increase your pool size by 50%. (Edit - I just noticed that you are in the EU. I don't know what shipping costs would be, but they would still be significantly less expensive than new.) While they may have lots of hours, they usually have few power cycles. I got one last year that had 26,000 hours but only 5 power cycles. Just something to think about.
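Quick sanity check on the "same space" point (raw data capacity only; the ~10.4 TB you'll actually see is after ZFS overhead and padding, which this sketch ignores):

Python:
drive_tb = 2                           # 2 TB (decimal) WD Reds
layouts = {
    "1 x 8-wide RAIDZ2": 8 - 2,        # six data disks
    "2 x 4-wide RAIDZ1": 2 * (4 - 1),  # also six data disks
}
for name, data_disks in layouts.items():
    tib = data_disks * drive_tb * 1e12 / 1024**4
    print(f"{name}: ~{tib:.1f} TiB of raw data capacity")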
 

William Bravin

Contributor
Joined
Mar 16, 2016
Messages
195
Many thanks for your guidance. I was under the impression that WD Red and IronWolf were enterprise-quality drives. I am not familiar with the Hitachi drives; I will look into this. I see many FreeNAS users who have HGST drives. Now that HGST has been bought and fully integrated into WD, with the brand phased out in 2018, are these older drives, and does it now make a difference whether they are called WD or Seagate, since it is all one company?

I will not need to migrate to 3 TB drives (8 x 3), because I know I will not add much to my music, and will probably only add 1 TB of TV and 1 TB of movies during the rest of my life, hence I believe I will have plenty of space. Once again, thank you for your guidance.
 

AVB

Contributor
Joined
Apr 29, 2012
Messages
174
Many thanks for your guidance. I was under the impression that WD Red and IronWolf were enterprise-quality drives. I am not familiar with the Hitachi drives; I will look into this. I see many FreeNAS users who have HGST drives. Now that HGST has been bought and fully integrated into WD, with the brand phased out in 2018, are these older drives, and does it now make a difference whether they are called WD or Seagate, since it is all one company?

I will not need to migrate to 3 TB drives (8 x 3), because I know I will not add much to my music, and will probably only add 1 TB of TV and 1 TB of movies during the rest of my life, hence I believe I will have plenty of space. Once again, thank you for your guidance.

My understanding is that WD Red and Seagate IronWolf are NAS-quality drives; enterprise would be the next step up from them. WD and Seagate are 2 separate companies, and which one you think is better is up to you. As for space, I was in your position a few years ago and now my media server has 24 TB of movies on it (I really don't shrink them much, if at all). You can never have too much free space, IMO, since a Blu-ray can take 30 GB and a 4K Blu-ray can get to 100 GB for just one movie. I've been avoiding 4K movies, but after the next upgrade (server and TV) I'll have the space to seriously consider them.
 

William Bravin

Contributor
Joined
Mar 16, 2016
Messages
195
Hi, thank you for your thoughts. I guess I am much older than you and do not have that much longer to go. I have very few Blu-rays and no 4K movies. I will be adding a 65-inch TV next year, and it will probably be 4K, because that is all there is. However, so far I am not able to appreciate the difference in quality; what I saw in the stores is not that much better than what I currently have. I even brought in two USB sticks, with a Blu-ray movie on one and one of my regular movies on the other. Yes, there is a difference, but I am not prepared to make the required investment for that jump in video quality. My investment is mostly on the audio side; there I have spent a few bucks, because I like to hear the details in the music. Most of my music is organised in playlists to accommodate my moods or what I am doing (I have 72 of them). The music is always on in my house.

I did investigate the HGST drives and will be buying those as my next purchase. In Europe these drives are about 120 euros, and I am prepared to invest in them.

Again, many thanks for your opinion; it made me think about the direction I should take the development of this environment.
 