ZFS or HW RAID 6 for a 16-drive build


NAS777

Dabbler
Joined
Feb 15, 2017
Messages
28
I have a new Supermicro 4U server with a high-quality LSI hardware RAID card, and I also have 16 WD Red drives.

I have been reading a lot about the best configuration for my needs, but I am still having trouble deciding which one fits mine.

I mostly care about read speed, since writes are less common. Second, I care about maximum space. Third, I care about redundancy: while this isn't my backup, I would like to be able to recover easily if need be.

Is FreeNAS the best option for my needs? I have looked at RAIDZ2, but I am unsure how many vdevs I should divide the 16 drives into, or whether I should instead use the server-grade LSI card for a RAID 6 setup.


Thanks for any advice you can give me.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The question is, what do you want to do with the server, exactly?
 
Joined
Apr 9, 2015
Messages
1,258
Hardware RAID has a lot of quirks. If you intend to use FreeNAS as the OS, hardware RAID is the worst choice you can make: it will work until it just doesn't, and then you will basically be SOL.

As for how to set up the drives in a ZFS pool, it depends on who you ask and what you want, as was already said. Sixteen drives split easily into two vdevs of eight, as either RAIDZ2 or RAIDZ3: if redundancy worries you more, pick the latter; if space does, the former. It also depends on what you want to do with the system. RAIDZ is great for bulk data storage, and two eight-drive vdevs in either RAIDZ2 or RAIDZ3 would easily fill a gigabit connection and do pretty well on a 10-gigabit one. However, if you are hosting VMs, you would want multiple mirror vdevs of two or three drives each for the raw speed and IOPS.

With media and files being served to clients, I would personally do two RAIDZ3 vdevs using 4 TB drives or larger. The reason for RAIDZ3 is that if one vdev fails, the whole pool is lost, so IMHO the more redundancy the better for the pool's sake.
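
To put rough numbers on that trade-off, here is a back-of-the-envelope sketch (my own assumptions, not benchmarks: 4 TB drives, ~75 random IOPS per spinning disk, and random IOPS scaling with vdev count rather than drive count; it ignores ZFS metadata overhead, RAIDZ padding, and free-space headroom):

```python
# Rough comparison of 16-drive layouts: usable space vs. random IOPS.
# Assumed numbers only; treat the output as ballpark.

DRIVE_TB = 4
DISK_IOPS = 75  # ballpark for a 7200 rpm drive

# name: (vdev_count, drives_per_vdev, redundant_drives_per_vdev)
layouts = {
    "2 x 8-wide RAIDZ2": (2, 8, 2),
    "2 x 8-wide RAIDZ3": (2, 8, 3),
    "8 x 2-way mirrors": (8, 2, 1),
}

for name, (vdevs, width, redundancy) in layouts.items():
    usable_tb = vdevs * (width - redundancy) * DRIVE_TB
    iops = vdevs * DISK_IOPS  # roughly one drive's worth per vdev
    print(f"{name}: ~{usable_tb} TB usable, ~{iops} random IOPS, "
          f"tolerates {redundancy} failure(s) per vdev")
```

The mirror layout trades 16 TB of usable space for roughly four times the random IOPS of the two-RAIDZ2 layout, which is the usual argument for mirrors under VMs.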
 

NAS777

Dabbler
Joined
Feb 15, 2017
Messages
28
Thanks to both of you for the replies.

It is just a file server, DVR, and camera storage.

I will switch the LSI card (which I need for the backplane) to JBOD and let the OS manage the disks.

I am curious why people divide the drives into two vdevs if they are using RAIDZ3.
To me it seems that either one vdev of all 16 drives in RAIDZ3 is enough or, even better,
two RAIDZ2 vdevs of eight drives each with two spares (quick count below).
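
A quick count of data versus redundancy drives for those options (a sketch; my reading of "two spares" is that they come out of the same 16 drives, leaving two 7-wide RAIDZ2 vdevs):

```python
# Data-vs-redundancy count for the candidate 16-drive layouts above.

options = {
    # name: (data drives, redundancy drives, hot spares)
    "1 x 16-wide RAIDZ3":           (13, 3, 0),
    "2 x 8-wide RAIDZ2, no spares": (12, 4, 0),
    "2 x 7-wide RAIDZ2 + 2 spares": (10, 4, 2),
}

for name, (data, redundancy, spares) in options.items():
    print(f"{name}: {data} data / {redundancy} redundancy / {spares} spare(s)")
```

The 16-wide RAIDZ3 nets one extra data drive, but it puts every drive into one very wide vdev.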
 
Joined
Apr 9, 2015
Messages
1,258
JBOD is not the same as IT mode. If it is a RAID-only card with cache memory, a battery backup, and so on, you would be better served by grabbing a 9211-8i or equivalent flashed to IT mode. If the card you have can be flashed to IT mode, then do it.

The reason for multiple vdevs is that once you get past a certain number of drives, you increase your potential problems; around 12 drives is the most you want to put in a single vdev. The reason for more redundancy is that if a single vdev goes down, the entire pool is lost. Also read http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/. Imagine having all of your data just disappear like a fart in the wind because drive failures on a single vdev surpassed your redundancy. The more important the data, the more redundancy you want available; a hot spare or two, while great to have, still takes time to resilver, unlike an array that already has three-drive redundancy.
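
The math behind that article is easy to reproduce. A rough sketch using the 1-in-10^14-bits unrecoverable-read-error (URE) rate commonly quoted for consumer-class drives (my numbers, decimal terabytes):

```python
# Rough odds of hitting at least one unrecoverable read error (URE)
# while reading a given amount of data, e.g. during a rebuild.
# Uses the 1-in-1e14-bits figure commonly quoted for consumer drives;
# enterprise drives are often rated at 1e15.

URE_RATE = 1e-14      # probability of a URE per bit read
TB_BITS = 1e12 * 8    # bits in a (decimal) terabyte

def p_ure(tb_read: float) -> float:
    """Probability of at least one URE while reading tb_read TB."""
    return 1 - (1 - URE_RATE) ** (tb_read * TB_BITS)

# Rebuilding after one failure in an 8 x 4 TB vdev means reading
# roughly the 7 surviving drives, i.e. ~28 TB.
for tb in (4, 12, 28):
    print(f"read {tb:>2} TB -> ~{p_ure(tb):.0%} chance of at least one URE")
```

A resilver reads most of the vdev, so at these sizes latent errors are close to certain, and extra parity is what absorbs them. ZFS also fails more gracefully here than classic RAID: a bad block that exhausts redundancy costs individual files, not the whole array.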

A lot of this is also personal preference and planning for the future. Will you be able to replace drives as soon as they go down, or will you have to wait for an order to arrive (days or even weeks later)? Will you have automated backups of your data monthly, weekly, daily, hourly? And, given your backup schedule, how much important data would you lose should the entire pool crash?

I went with RAIDZ3 on 4 TB drives because I expect to add a second vdev of at least 6 TB (if not 8 TB) drives in a few years. After that I will start replacing my current 4 TB drives one at a time to expand the pool further, likely with 8 TB or 10 TB drives. I do not want to lose the pool now or in the future, even through expansions.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I went with multiples of 8 drives in RAIDZ2 in a similar situation.

Flash the card to IT mode before using it with FreeNAS.

RAIDZ2 is similar to, but better than, RAID 6.

Two RAIDZ2 vdevs will give you double the IOPS of a single RAIDZ3 vdev.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
For the same reason I say RAIDZ1 is fine for most uses (unless you're a clown with no backups), I also say RAIDZ2 is just as good as RAIDZ3, unless you won't be able to replace a failed drive for months.

The two primary reasons are:

First, every scrub is equivalent to a resilver. If your drives survive every scrub, they are just as likely to survive the resilver when a drive does fail. The RAID mentality assumes scrubs never happen, so it is unknown whether any particular drive can survive a rebuild, and latent failures are therefore discovered during the rebuild itself.

Second, ZFS is not RAID. ZFS stores metadata twice, and ZFS knows the purpose of every block. While RAID may fail completely because it cannot rebuild a single block, ZFS can simply note the error and carry on.

On top of all of that, people have lost RAIDZ2 and RAIDZ3 pools. It is rarely, if ever, due to organic, correlated hard-disk failures; it is due to systemic failures, human error, or undetermined causes (a power failure and the pool is now dead! WTF?). In other words, you had better be keeping a backup, and if you have good backups, then Why So Scared?
 

NAS777

Dabbler
Joined
Feb 15, 2017
Messages
28
Well, I have an SMC2108, which to my knowledge cannot be flashed, so I will need to get an M1015 or a 9211-8i (suggested earlier). My question is: will the M1015 run the backplane in this Supermicro chassis, or do I have to buy another case to power 24 drives?

I have more bad news: it turns out I have 6 Seagate desktop HDDs and 10 WD Reds, so maybe I am better suited to mirrors in this configuration (rough capacity math below):
vdev1 == 1 Seagate desktop HDD + 1 WD Red
vdev2 == 1 Seagate desktop HDD + 1 WD Red
vdev3 == 1 Seagate desktop HDD + 1 WD Red
vdev4 == 1 Seagate desktop HDD + 1 WD Red
vdev5 == 1 Seagate desktop HDD + 1 WD Red
vdev6 == 1 Seagate desktop HDD + 1 WD Red
vdev7 == 1 WD Red + 1 WD Red
vdev8 == 1 WD Red + 1 WD Red

at least until I can replace the Seagate desktop HDDs one by one.
Will this show up as 8 different partitions, or can I configure it to appear as one drive?
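
For scale, here is the capacity math on that mirror layout, with placeholder sizes since the actual Seagate and Red capacities weren't given. Two things hold regardless of the sizes: a mirror vdev contributes the capacity of its smaller member, and all vdevs pool their space together:

```python
# Capacity sketch for the proposed 8-mirror pool. Drive sizes are
# placeholders (actual capacities weren't stated). A mirror vdev
# contributes the size of its smaller member, and every vdev's
# space lands in the same single pool.

SEAGATE_TB = 3  # hypothetical desktop-drive size
WD_RED_TB = 4   # hypothetical Red size

mirrors = [(SEAGATE_TB, WD_RED_TB)] * 6 + [(WD_RED_TB, WD_RED_TB)] * 2

pool_tb = sum(min(pair) for pair in mirrors)
print(f"one pool, ~{pool_tb} TB usable")  # 6*3 + 2*4 = 26 here

# Swapping a Seagate for a drive at least as big as its Red partner
# later lets that vdev grow to the Red's size (with autoexpand on).
```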
 


Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
FWIW, an M1115 will also work.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
With that 8-drive vdev setup, will it be 8 separate partitions?
Sounds like you need to read Cyberjock's guide. Link in my sig.
 