What number of drives are allowed in a RAIDZ config?

Tekkie

Patron
Joined
May 31, 2011
Messages
353
I've read the ZFS Best Practices Guide on RAIDZ Configuration Requirements and Recommendations but I am still confused as to what number of drives a RAIDZ array should/must have.

The wiki page has the following bit:
  • Start a single-parity RAIDZ (raidz) configuration at 3 disks (2+1)
  • Start a double-parity RAIDZ (raidz2) configuration at 5 disks (3+2)
  • Start a triple-parity RAIDZ (raidz3) configuration at 8 disks (5+3)
  • (N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3) and N equals 2, 4, or 8
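Read literally, those bullets can be sketched as a small check. This is my own illustration (the dictionaries and function name are not from the guide), encoding only the quoted "start at" counts:

```python
# A sketch (not from the guide) of the "start at" recommendations quoted
# above; the dictionaries and function name are my own illustration.

RAIDZ_PARITY = {"raidz": 1, "raidz2": 2, "raidz3": 3}
RECOMMENDED_MIN = {"raidz": 3, "raidz2": 5, "raidz3": 8}  # 2+1, 3+2, 5+3

def describe(level, drives):
    """Describe the N+P split for `drives` disks at the given RAIDZ level."""
    p = RAIDZ_PARITY[level]
    n = drives - p
    ok = drives >= RECOMMENDED_MIN[level]
    note = "meets the recommendation" if ok else "below the recommended start"
    return f"{level} with {drives} disks = {n}+{p} ({note})"

# The 6-drive question from the post:
print(describe("raidz2", 6))  # raidz2 with 6 disks = 4+2 (meets the recommendation)
```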

If I have 6 drives what RAIDZ configuration can I build? Is raidz2 (4+2) possible?
 

raulfg3

Dabbler
Joined
May 27, 2011
Messages
40
If I have 6 drives what RAIDZ configuration can I build? Is raidz2 (4+2) possible?
Yes. (3+2) is the minimum number of disks for raidz2, so a 6-disk (4+2) config is fine.


PS: For home use, raidz1 is normally a good choice. In your case you could use (2+1)+(2+1) disks: with that layout you only need to replace 3 disks to grow your pool, whereas with (4+2) you need to replace all 6 disks to grow.
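As a rough sketch of that trade-off (my own illustration, assuming hypothetical 1.5 TB drives), both 6-disk layouts give the same usable space; they differ in growth and failure tolerance:

```python
# A rough sketch (my own, with hypothetical 1.5 TB drives) of the trade-off
# above: two (2+1) raidz1 vdevs versus one (4+2) raidz2, both using 6 disks.

def usable_tb(vdevs, drive_tb):
    """Usable capacity in TB: each (data, parity) vdev contributes data * size."""
    return sum(data for data, parity in vdevs) * drive_tb

two_raidz1 = [(2, 1), (2, 1)]  # grow the pool by replacing one 3-disk vdev
one_raidz2 = [(4, 2)]          # growing means replacing all 6 disks

print(usable_tb(two_raidz1, 1.5))  # 6.0 TB usable; survives 1 failure per vdev
print(usable_tb(one_raidz2, 1.5))  # 6.0 TB usable; survives any 2 failures
```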
 

Tekkie

Patron
Joined
May 31, 2011
Messages
353
Come to think of it, 2 x (2+1) is really a good idea with an eye on the future. Thanks!

After giving this more thought, the (4+2) config is more appealing because it's more secure. My old 1.5 TB RAID5 setups had a single external USB drive attached as a backup, which covered the case of two drives dying, but with 4 TB in a single array that's no longer an option. I might even go as far as saying that (5+3) is the best option at minimal extra cost.
 

torrin

Moderator
Joined
May 30, 2011
Messages
32
But if you lose two drives in the same array, you lose all of your data. I have 6 x 1.5 TB drives and I opted for a (4+2) RAIDZ2.
 

Tekkie

Patron
Joined
May 31, 2011
Messages
353
@torrin

What are the chances of losing 2 drives in a single array?
 

torrin

Moderator
Joined
May 30, 2011
Messages
32
The issue usually revolves around the extra work the other two drives are doing to rebuild the 3rd disk if one fails. Drive fails, replace it, other two drives work hard to rebuild data, second drive fails before the first is rebuilt. It really is a discussion on how important your data is to you and how good your backups are. :)
 

jafin

Explorer
Joined
May 30, 2011
Messages
51
Also, depending on whether you have 512-byte or 4K-sector drives (like the Samsung F4EG), there seems to be some advice that certain magic numbers of disks work better for RAIDZ on 4K drives.

Either way, do go RAID-Z2; 10 disks in RAID-Z2 would be optimal for the Samsung F4. With 512-byte sector HDDs you have the advantage of more flexibility: a 9-disk RAID-Z2, for example, would work just as well. The Samsung F4 and other 4K disks prefer special combinations like:
RAID-Z: 3, 5 or 9 disks
RAID-Z2: 6 or 10 disks

Ref: http://hardforum.com/showthread.php?p=1036838865

sub.mesa wrote:
As I understand it, the performance issue with 4K disks isn't just partition alignment, but also RAID-Z's variable stripe size.
RAID-Z basically spreads the 128 KiB recordsize across its data disks. That leads to a formula like:
128KiB / (nr_of_drives - parity_drives) = maximum (default) variable stripe size
Let’s do some examples:
3-disk RAID-Z = 128KiB / 2 = 64KiB = good
4-disk RAID-Z = 128KiB / 3 = ~43KiB = BAD!
5-disk RAID-Z = 128KiB / 4 = 32KiB = good
9-disk RAID-Z = 128KiB / 8 = 16KiB = good
4-disk RAID-Z2 = 128KiB / 2 = 64KiB = good
5-disk RAID-Z2 = 128KiB / 3 = ~43KiB = BAD!
6-disk RAID-Z2 = 128KiB / 4 = 32KiB = good
10-disk RAID-Z2 = 128KiB / 8 = 16KiB = good

Ref: http://forums.servethehome.com/showthread.php?30-4K-Green-5200-7200-...-Questions
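sub.mesa's rule of thumb can be sketched in a few lines. This just re-derives the table in the quote; the helper names are my own:

```python
# A sketch of sub.mesa's rule of thumb above: the 128 KiB record is split
# across the data disks, and 4K-sector drives prefer the result to be a
# power of two (in KiB). Helper names are my own.

RECORD_KIB = 128

def stripe_kib(total_disks, parity_disks):
    """Maximum (default) per-disk variable stripe size in KiB."""
    return RECORD_KIB / (total_disks - parity_disks)

def is_power_of_two(kib):
    """True when the stripe size is a whole power-of-two number of KiB."""
    return kib == int(kib) and (int(kib) & (int(kib) - 1)) == 0

for disks, parity in [(3, 1), (4, 1), (5, 1), (9, 1), (5, 2), (6, 2), (10, 2)]:
    s = stripe_kib(disks, parity)
    label = f"{disks}-disk RAID-Z{parity if parity > 1 else ''}"
    print(f"{label}: {s:.0f} KiB -> {'good' if is_power_of_two(s) else 'BAD'}")
```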
 

Tekkie

Patron
Joined
May 31, 2011
Messages
353
Is Seagate that bad??

That's not the first time I've read about Seagate drives failing fast in RAID configs.
 

esamett

Patron
Joined
May 28, 2011
Messages
345
If it happens to you: 100%. It is reported to be not uncommon for multiple drive failures to occur within the same manufacturing lot. This is one reason I spread out the purchase of my system over a month or so. In fact, in one shipment of two drives both were DOA with the "click of death." The replacement drives I received with my RMA worked fine.

ZFS/Z2 maintains its data integrity if two drives fail. This is a substantial improvement in protection over ZFS/Z1, the RAID5 equivalent. No amount of drive redundancy will protect you from a catastrophic server failure such as a bad power supply or a power surge frying your system. If that happened, you might be able to retrieve your data by replacing a drive's controller board. The recommendation (which I am not following at this time) is to keep an actual backup of your data on another physical server; I believe this can be done through FreeNAS with iSCSI.

I started out with a 4-disk array to play with. When I found out about the ZFS limitation preventing ad-hoc addition of drives, I decided to hold out until I could purchase my full array. That was originally going to be 8 drives in a Z2 configuration, but when I read about the ideal number of drives I sprang for a 10-drive array. The 512-byte sector drives I am currently using are not supposed to be as sensitive to this, but I wanted my array to be the proper size for the future, when I either need to replace a failed drive or upgrade to larger-capacity disks. I am betting that future drives are more likely to be of the advanced-format 4K-sector variety. My case is now completely full, including a modification to fit the last disk. Maybe I will start actually using this beast I have created.
 

esamett

Patron
Joined
May 28, 2011
Messages
345
Similar story; my failed drives were Hitachi.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
Interesting reading about the 4K balance of drives. I'm a simple home user and planned to use 4 Samsung F4 drives in a RAIDZ, but now maybe it will be three in a RAIDZ plus a single ZFS or UFS drive for movies (whichever works better for streaming movies). Guess I have more to read.
 
Joined
May 27, 2011
Messages
566
To answer the original question: raidz needs 2 disks (1 data, 1 parity) and raidz2 needs 3 (1 data, 2 parity).

It would be stupid to build either, but those are the minimums.
 

mcleanrs

Dabbler
Joined
Jun 1, 2011
Messages
10
I oversee two 8-disk Linux boxes at work; both are set up as RAID-5, and they're running a proprietary build of RedHat. Purchased before my time, blah blah blah; I don't like them, but they're what we have. I'm also a jack of all trades and ace of none, so I'm happy to keep them running, but I'm not in a hurry to rebuild them or anything. OK, let me get to the point.

One box is about 5 years old, with 8 Hitachi 250 GB hard drives, and all 8 drives are original.

The other box is about 3 years old, running Seagate Barracuda 750 GB drives.
Around Thanksgiving it unexpectedly kicked out two drives, failing the array. I didn't know a thing about mdadm, and we didn't have a backup of our data outside the RAID (though I had been pushing for that). About $1,000 worth of tech support later (thank goodness it was only that; it could have been a lot more, and it saved tens of thousands of dollars worth of data), our array was back online.

By Christmas our Linux RAIDs had been joined by a Synology DS1010+ with the DX510 expansion module and ten 2 TB drives.

By New Year's, the RAIDs had been backed up to the Synology (and I was no longer intimidated by rsync!)

So all was fine and well until about 2 weeks ago, when the exact same thing happened: 2 more of those Seagate drives died. This time I was a little more brazen about throwing around mdadm commands, since I knew we had a backup, and I was able to limp the RAID along for a few days until we got some fresh drives on hand.

Lessons learned?
• The chances of losing two drives at a time will seem low until it happens to you.
• Seagate 750 GB drives are awful (4 failed! Out of 8!)
• RAID alone != backup, especially for expensive files. Get those files onto a second box!
 

ntshad0w

Cadet
Joined
Sep 29, 2012
Messages
2
Tekkie, mates,

I'm new here, but I'm an older IT pro; my main work revolves around arrays and drives, and my private lab also has a lot of drives and RAIDs.

In my career I have seen hundreds of arrays, and repaired, installed, and configured them, so I've seen thousands of HDDs online and hundreds that failed.

So, to all, from my experience with RAIDs:

- RAID5 on enterprise-class disks and hardware is good for backups, second mirrors, and less important data that has a daily backup
- RAID5 enterprise arrays with 5 disks (the common and best-performing RAID5 config) do crash sometimes: about 60% of the time because a failed drive went unreplaced for months or even years, 20% during the rebuild of a failed drive, and 20% for other reasons; statistically it happens about once per year
- RAID5 on SATA disks in home/SOHO servers fails about 70% of the time during the rebuild of a failed drive, 10% because a failed drive went unreplaced for some months, and 20% for other reasons; it happens 1-2 times a year on old drives and 2-4 times a year on newer drives. Yes, drives that have been working for more than a year are much more reliable than new drives, which is quite an important point for home RAID enthusiasts :)
- In my lab I have about 30 SATA HDDs and 8 SSDs: 1x RAID6 of 6 HDDs, 2x RAID5 of 5 HDDs with a spare, 1x RAID-Z (RAID5-like) ZFS pool of 5 HDDs with a spare plus an SSD for cache and logs, and 1x RAID10 of 4 HDDs. In the last 5 years I have had one loss of data, on RAID5 during a rebuild on SATA drives: a second drive failed 3 times, and after the 4th time the RAID5 volume was destroyed because of inconsistent data (that was a RAID controller without BBWC)
- If you have more than 5 disks in RAID5 (from a performance perspective I recommend 5, 9, 13, etc., meaning 4+1, 4+4+1, 4+4+4+1, etc.), the possibility of faults increases dramatically with each 4 disks you add. Think, for example, how easily an array of 9 disks can fail once 1 disk has already failed: the probability is 8 times that of a single drive, so it's huge.
- RAID6/RAID-Z2 is not a bad solution, but it needs a good RoC controller with BBWC (I especially recommend one for RAID-Z/RAID-Z2) to be as fast as RAID5/RAID-Z, and you should use disk counts like 4+2, 4+4+2, 4+4+4+2 (for optimal performance, of course)
- Additionally, from my (approximate) statistics:
- 1 of every 40 new FC drives fails in the first week; 2 fail within a year
- 1 of every 34 new SAS drives fails in the first week; 2 fail within a year
- 1 of every 14 new enterprise SATA drives fails in the first week; 2-4 fail within a year!
- 1 of every 8 new cheap consumer SATA drives fails in the first week; 2-4 fail within a year
- 1-2 of every 12 year-old SATA drives fail in the following year

So now you have some picture of how often RAID5 may fail and why, and of what you can expect from RAID5/RAID-Z. (RAID-Z is a little more secure if properly configured, because it rebuilds only the data rather than the whole drive, but even with 2 TB HDDs a rebuild will take more than 24 hours, during which you are at risk.)

kind regards
NTShad0w
 

djdefekt

Cadet
Joined
Jan 29, 2013
Messages
1
Tekkie, mates,

(snip)

- RAID6/RAID-Z2 is not a bad solution, but it needs a good RoC controller with BBWC (I especially recommend one for RAID-Z/RAID-Z2) to be as fast as RAID5/RAID-Z, and you should use disk counts like 4+2, 4+4+2, 4+4+4+2 (for optimal performance, of course)
- Additionally, from my (approximate) statistics:
- 1 of every 40 new FC drives fails in the first week; 2 fail within a year
- 1 of every 34 new SAS drives fails in the first week; 2 fail within a year
- 1 of every 14 new enterprise SATA drives fails in the first week; 2-4 fail within a year!
- 1 of every 8 new cheap consumer SATA drives fails in the first week; 2-4 fail within a year
- 1-2 of every 12 year-old SATA drives fail in the following year

So now you have some picture of how often RAID5 may fail and why, and of what you can expect from RAID5/RAID-Z. (RAID-Z is a little more secure if properly configured, because it rebuilds only the data rather than the whole drive, but even with 2 TB HDDs a rebuild will take more than 24 hours, during which you are at risk.)

kind regards
NTShad0w

NTShad0w, thanks for sharing your insight and experience with us.

I was wondering if you had a recommendation for someone like myself with an enclosure housing 4 x 2TB drives (ZFS under FreeNAS on a HP N36L Microserver)?

I would like my data to be as secure as possible (though I know not to treat this as a backup) and I'd like to make sure I don't do anything to inadvertently degrade performance beyond the relatively low level I will get with RAIDZ/RAIDZ2.

As far as I understand, 3+1 in RAIDZ results in a "bad" stripe width (~43 KiB), whereas 2+2 in RAIDZ2 results in a "good" stripe width (64 KiB); this comes from the way the 128 KiB record is split across the data disks (and presumably because 64 KiB is a power of two). On this basis RAIDZ2 looks like a better choice, despite the fact that I lose 50% of my available storage space to parity.

Is there anything I should know about a 2+2 RAIDZ configuration that would cause you to advise me against using it?

Thanks in advance,

DJ Defekt
 

Mike77

Contributor
Joined
Nov 15, 2014
Messages
193
Hi,

It's been a while since there have been any posts on this topic. Has anything changed? I'm testing a couple of new drives at the moment (probably over the coming days). After that (if all test results are good) I want to set up a RAID-Z2 with 7x4TB (WD Red) drives. Is this a good idea, or should I really go for RAID-Z3?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I don't see any problem with that ;)
 

Mike77

Contributor
Joined
Nov 15, 2014
Messages
193
I don't see any problem with that ;)

Thanks! Wouldn't it be better to split it into 2 vdevs, or use Z3?

And I just read that the new stable FreeNAS build needs an 8 GB thumb drive. Is that a minimum?
 