Tekkie, mates,
I'm new here, but I'm a somewhat older IT pro; my main work revolves around arrays and drives, and my private lab also has a lot of drives and RAIDs...
Over my career I've seen hundreds of arrays: repaired them, installed and configured them, and watched how they behave. So I've seen thousands of HDDs online and hundreds of them fail.
So, to all of you, here is what I can say about RAIDs from my experience:
- RAID 5 on enterprise-class disks and hardware is fine for backups, a second mirror, and less important data that gets a daily backup
- enterprise RAID 5 arrays with 5 disks (a common and the best-performing RAID 5 config) do crash sometimes: about 60% of the time because a failed drive was left unreplaced for months or even years, about 20% during the rebuild of a failed drive, and the remaining 20% for other reasons; statistically this happens roughly once per year
- RAID 5 on SATA disks in home/SOHO servers fails about 70% of the time during the rebuild of a failed drive, about 10% of the time because a failed drive went unreplaced for some months, and the remaining 20% for other reasons; it happens 1-2 times a year on old drives and 2-4 times a year on the newest drives. Yes, older drives that have been working for more than a year are much more reliable than new ones, which is quite an important argument for home RAID enthusiasts :))
- in my lab I have about 30 SATA HDDs and 8 SSDs, arranged as 1x RAID 6 of 6 HDDs, 2x RAID 5 of 5 HDDs with a spare, 1x ZFS RAID-Z (comparable to RAID 5) of 5 HDDs with a spare plus an SSD for cache and logs, and 1x RAID 10 of 4 HDDs. In the last 5 years I have lost data once, on the SATA RAID 5 during a rebuild: a second drive failed 3 times, and after the 4th time the RAID 5 volume was destroyed because the data was inconsistent (that was a RAID controller without BBWC)
- if you have more than 5 disks in a RAID 5 (from a performance perspective I recommend 5, 9, 13, etc., meaning 4+1, 4+4+1, 4+4+4+1, etc.), the probability of a fault increases dramatically with each group of 4 disks you add. Think, for example, how easily an array of 9 disks can fail once 1 disk has already failed: any of the remaining 8 drives can kill the rebuild, so the probability is roughly 8 times that of a single drive, which is huge (see the sketch after this list)
- RAID 6/RAID-Z2 is not a bad solution, but it needs a good RoC controller with BBWC (I recommend that especially for RAID-Z/RAID-Z2) to be as fast as RAID 5/RAID-Z, and you should use disk counts with the same key: 4+2, 4+4+2, 4+4+4+2 (for optimal performance, of course)
- additionally, from my statistics (approximate figures):
- 1 of every 40 new FC drives fails within the first week, and 2 of every 40 fail within a year
- 1 of every 34 new SAS drives fails within the first week, and 2 of every 34 fail within a year
- 1 of every 14 new enterprise SATA drives fails within the first week, and 2-4 of every 14 fail within a year!!!
- 1 of every 8 new cheap consumer SATA drives fails within the first week, and 2-4 of every 8 fail within a year
- 1-2 of every 12 one-year-old SATA drives fail in the following year
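To make the wide-RAID 5 argument above concrete, here is a minimal Python sketch. The per-drive failure probability p is purely an assumed example value, not one of my measured numbers; the point is only how the risk scales with the number of surviving drives:

```python
# Minimal sketch: chance a degraded RAID 5 dies during its rebuild.
# Assumption: each surviving drive fails independently during the rebuild
# window with probability p (p = 0.02 is an illustrative guess, nothing more).

def raid5_rebuild_loss(total_disks: int, p: float = 0.02) -> float:
    """Probability that at least one of the remaining drives fails
    while a RAID 5 of `total_disks` rebuilds its one failed disk."""
    survivors = total_disks - 1
    return 1 - (1 - p) ** survivors

for n in (5, 9, 13):  # the 4+1, 4+4+1, 4+4+4+1 layouts mentioned above
    print(f"{n} disks: {raid5_rebuild_loss(n):.1%} risk during rebuild")
```

For small p this is roughly (n-1) x p, which is exactly where the "8 times a single drive" figure for a 9-disk array comes from.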
so now you should have some picture of how often a RAID 5 can fail and why, and of what you can expect from RAID 5/RAID-Z (RAID-Z is a little more secure if properly configured, because it rebuilds only the data rather than the whole drive, but even on 2TB HDDs a rebuild will take more than 24h, and all that time we are in danger)
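On that "more than 24h" point, here is the same kind of back-of-the-envelope estimate; the throughput figures are assumptions (real rebuild speed varies a lot with the controller and I/O load):

```python
# Rough rebuild-time estimate; the throughput figures are assumptions.

def rebuild_hours(capacity_tb: float, mb_per_s: float) -> float:
    """Hours to rewrite `capacity_tb` TB of data at `mb_per_s` MB/s."""
    return capacity_tb * 1_000_000 / mb_per_s / 3600

print(f"2 TB at 20 MB/s: {rebuild_hours(2.0, 20):.1f} h")  # ~27.8 h, busy array
print(f"2 TB at 50 MB/s: {rebuild_hours(2.0, 50):.1f} h")  # ~11.1 h, idle array
```

A RAID-Z resilver only touches allocated data, so on a half-empty pool the danger window shrinks accordingly, while a block-level RAID 5 rebuild always rewrites the full drive.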
kind regards
NTShad0w