Optimal pool/vdev/drive configuration for 16 drives?

EPU

Dabbler
Joined
Jul 26, 2019
Messages
12
This will probably be the first of many posts here, so "Hi" :)

I'm new to FreeNAS, although I have been running servers and networks at home for many years, previously always using COTS NASes (typically QNAP). I'm now looking at building a FreeNAS box to start consolidating a lot of the other systems in my house into a single box (web server, WordPress site, Plex server, storage, etc.). I'm still in the process of putting together the hardware list, but given that the drives are always a significant part of the cost of any build, I have a query about the best configuration to use for the setup I have.

My main chassis is a LogicCase SC4136S (4U, short depth, 16-bay hot swap), and I am looking for some advice on how best to configure the array. I'm looking for resilience, and I'm not overly bothered about overall capacity of the array (I know I can always buy larger drives). However, I've had drive failures in arrays before and I'd like to reduce the inherent risk of a failure during a drive rebuild.

Note I have some drives already (2TB, 3TB and 4TB), so going for a single-vdev, 16-drive RAIDz2 (or 3) setup doesn't really give me the best capacity/drive configuration from the outset (I know it would if I just bought all of the same size, but I don't wish to dump all my old drives just yet).

Reading the documentation, I believe I have two main options to give me the highest resilience, a reasonable sized pool, and the flexibility to not need all the drives to be the same size.

So, ignoring cost for a second, is it better to run

Option1
vdev1 = 8 drives in RAIDz2
vdev2 = 8 drives in RAIDz2

Option2
vdev1 = 6 drives in RAIDz2 with 1 hot spare
vdev2 = 8 drives in RAIDz2 with 1 hot spare

Option3
Something completely different from the above (I know I can go RAIDz3, but I can't see the inherent advantage in that given the other resilience options available to me)

Option 1 obviously maximises the storage pool, whilst the advantage I see in Option 2 is that if there is an issue whilst I am away from home and unable to perform a drive swap, the hot spare will automatically take over from the failed drive. I obviously lose storage capacity, but this could be ameliorated by using larger drives in the first vdev.
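For concreteness, here is a rough sketch of what Option 1 would look like from the command line. It is only illustrative: the device names (da0, da1, ...) and the pool name "tank" are placeholders, and on FreeNAS the GUI would normally build the pool for you.

# Option 1 as a single pool striped across two 8-drive RAIDz2 vdevs
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# Verify that both vdevs and all member drives show up as expected
zpool status tank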

Am I on the right track or should I be completely rethinking this?

Comments/thoughts appreciated.

Thank You
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi Epu,

Welcome to the community.

A single RaidZ-X vDev as large as 16 drives is not really recommended. A single vDev will basically give you the random-I/O performance (IOPS) of a single drive, despite there being 16 drives in it.

Your options with 2 vDevs are interesting. A little more performance, but that's it: a *little* more performance.

How about Raid-10? Raid-10 is what will give you the best performance. Either as a pool of 8 mirrors and a cold spare, or a pool of 7 mirrors with 2 hot spares.

The main drawback of Raid-10 is reduced capacity, but you said that max capacity was not your main goal. So overall, that could well be your best option.

3x 5-drive Raid-Z2 with a single hot spare would give you 3 vDevs and 9 drives' worth of usable space. Each vDev is very robust by itself, and with the hot spare on top, you would be very solid regarding local hard drive redundancy.
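As a rough illustration of the mirror and hot-spare layouts mentioned above (device names and the pool name are placeholders; the FreeNAS GUI is the usual way to do this):

# Pool of 7 two-way mirrors plus 2 hot spares (14 + 2 = 16 bays)
zpool create tank \
    mirror da0 da1  mirror da2 da3  mirror da4 da5  mirror da6 da7 \
    mirror da8 da9  mirror da10 da11  mirror da12 da13 \
    spare da14 da15

# Alternatively, a hot spare can be added to an existing pool later on
zpool add tank spare da14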

Your option 2 represents 10 drives of usable space but only 2 vDevs for speed. A little more space (11%), a little more redundancy (maybe up to overkill here...), but 33% less speed than the 3-vDev layout.

Your option 1 represents 12 drives of usable space and the same speed as option 2. A 20% increase in space, a robust system, and it is easy to add a cold spare should the dual RaidZ-2 redundancy alone be too low for you.

In the end, I would choose, in order:
Raid-10, 8 mirrors + cold spare; best performance and good enough redundancy for me
Raid-10, 7 mirrors + 2 hot spares; a little less performance but stronger redundancy
3x 5-drive RaidZ-2 + 1 hot spare; much less performance, but still better than the next options, and a very robust design
Your option 1; a little slow but pretty large and very strong
Your option 2; wasted space and low performance in the name of increased redundancy over what is already strong enough

So that is my two cents...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This will probably be the first of many posts here, so "Hi"
Welcome! Please read this: https://www.ixsystems.com/community/threads/forum-guidelines.45124
I'm still in the process of putting together the hardware list
Here is some very good information to help you get started:

FreeNAS® Quick Hardware Guide
https://www.ixsystems.com/community/resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide Rev. 1e) 2017-05-06
https://www.ixsystems.com/community/resources/hardware-recommendations-guide.12/

Hardware Recommendations by @cyberjock - from 26 Aug 2014 - and still valid
https://www.ixsystems.com/community/threads/hardware-recommendations-read-this-first.23069/

Proper Power Supply Sizing Guidance
https://www.ixsystems.com/community/threads/proper-power-supply-sizing-guidance.38811/

Please keep in mind that you do not need the latest hardware for FreeNAS to perform perfectly well. At work, I have systems that can saturate a 10Gb network that are still running on Supermicro X7 generation hardware with DDR2 memory. Compatibility is more important than anything else. The hardware recommendations are based on what is known to work reliably.
So, ignoring cost for a second, is it better to run

Option1
vdev1 = 8 drives in RAIDz2
vdev2 = 8 drives in RAIDz2
This is the option I would use for my videos and general files. I have been using two vdevs of six drives in my home NAS for around five or six years now. When I started, I was using re-purposed drives that I purchased from eBay, and I had a lot of failures; some of the drives were around five years old when I started using them. Even when I had two drives in the same vdev go out at the same time, I never lost any data. I keep cold spares ready to put in the system so I don't need to wait for a drive to come from a vendor. I use this layout for capacity and because it is fast enough for me, but it depends on what your need for speed is. More vdevs does equate to more IOPS, so a pool of mirrors would be advisable for hosting virtualization, but if you are just doing mass storage for video media in Plex, two vdevs is fine.
Option2
vdev1 = 6 drives in RAIDz2 with 1 hot spare
vdev2 = 8 drives in RAIDz2 with 1 hot spare
This is not a good choice because the vdevs are different sizes. All vdevs in a pool should have the same number of drives. You can have six 2TB drives in one vdev and six 4TB drives in another vdev and that is fine, but having a vdev of six drives and another vdev of eight drives, although it will work, is bad for performance.
the advantage I see in Option 2 is that if there is an issue whilst I am away from home and unable to perform a drive swap, the hot spare will automatically take over from the failed drive.
Are you away from home for extended periods, days or weeks? As I said before, I have had two drives fault in the same vdev on the same day, within seconds of one another. RAIDz2 is not going to lose any data until a third drive fails. This doesn't happen often. I did have a server at work where I was already replacing (resilvering is the term) two drives in a vdev when a third drive started having data errors. I lost a few files that had unrecoverable data errors, but the pool did not fail. Once the first two drives finished resilvering, I was able to replace the third drive and restore the damaged files from backup. ZFS is super resilient. If you are running RAIDz2, it is extraordinarily unlikely that you will have a failure that takes out your pool.
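For reference, replacing a failed drive boils down to a couple of commands (or the equivalent buttons in the GUI). A rough sketch, with illustrative device names (da3 as the failed drive, da16 as the freshly installed replacement):

# Find the FAULTED/UNAVAIL member of the pool
zpool status tank

# Start the resilver onto the newly installed drive
zpool replace tank da3 da16

# Watch progress until "resilver in progress" completes
zpool status tank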
How about Raid-10
We try not to use that terminology because we don't want to confuse people with hardware RAID. We always try to call that a pool of mirror vdevs.
 

EPU

Dabbler
Joined
Jul 26, 2019
Messages
12
Thanks for the quick replies and the advice. I have almost all of the hardware already; I just need an HBA that is compatible with the backplane of the chassis (the supplier recommended a Highpoint RocketRAID 840A, but that doesn't seem to be a popular choice based on my reading here).

It looks like the 2 x 8-drive vdevs make the most sense. I can be away for a couple of weeks at a time, hence the concern about the hot spare, but if the received wisdom is that I should be OK, then I'll go down that route.

Thank you.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Highpoint RocketRAID 840A
I would stay away from anything by Highpoint. We have a system at work that came from the vendor with two of these:
http://www.highpoint-tech.com/USA_new/series_R750-Overview.htm
Tried them, didn't like them, and replaced them with LSI SAS controllers instead.
This is the model I use and it is plenty fast:
https://www.ebay.com/itm/HP-H220-6Gbps-SAS-PCI-E-3-0-HBA-LSI-9207-8i-P20-IT-Mode-for-ZFS-FreeNAS-unRAID/162862201664
You can use a SAS expander with that to give you the 16 ports you need. This is the model I use:
https://www.ebay.com/itm/IBM-ServeRAID-16-Port-6Gbps-SAS-2-SATA-Expansion-Adapter-46M0997-Firmware-634A/163321588238
but it is important to ensure it is on the latest firmware.
The linked vendor does a great job of testing the hardware and updating the firmware before reselling it.
You could also just use two of the SAS HBA cards, but it is not needed. You can control around 256 drives with a single SAS controller depending on the model.
 

EPU

Dabbler
Joined
Jul 26, 2019
Messages
12
Managed to find 2 x LSI SAS 9207-8i cards on Amazon in the end, along with 4 x SFF-8087 to SFF-8087 cables, for less than the Highpoint, so I bought those instead.

Had some fun trying to figure out how to update the cards to the latest firmware (Broadcom's "instructions" aren't that helpful unfortunately). Ended up flashing FreeDOS to a USB key, and wrote some DOS batch scripts to ensure I didn't mistype any of the commands, so now have both flashed to 20.00.07.00 and they seem to work fine.
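In case it helps anyone else, here is roughly what the update boils down to under FreeDOS, using the DOS version of LSI's flasher (sas2flsh.exe). Treat it as a sketch only: the file names below (9207-8.bin, mptsas2.rom) are what I'd expect to find in the P20 package, so adjust them to whatever your download actually contains, and read Broadcom's readme before erasing anything.

REM List the controllers and note the index (-c) of each one
sas2flsh -listall

REM Flash the P20 IT-mode firmware onto controller 0
sas2flsh -o -f 9207-8.bin -c 0

REM Optional: update the boot BIOS as well (skip if you never boot from the HBA)
sas2flsh -o -b mptsas2.rom -c 0

REM Confirm the reported firmware version is now 20.00.07.00
sas2flsh -listall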

Thanks again for the advice.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I’d also opt for the dual z2 VDEV in a single pool for your situation.

If you’re willing to upgrade your hard drives you could even elect to go with a single set of drives in a VDEV and then add a second VDEV in the future if you run out of space.

Alternatively, you can slowly upgrade a VDEV in the future by swapping out drives one by one: take a drive offline in the GUI, swap in the larger drive, replace and resilver, and rinse and repeat until every drive in the VDEV has been upgraded.
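A rough command-line sketch of that one-drive-at-a-time upgrade (device names are placeholders; repeat the offline/replace/resilver cycle for every disk in the VDEV):

# Let the pool grow automatically once every drive in the vdev is larger
zpool set autoexpand=on tank

# For each disk in turn: take it out of service, physically swap in the
# bigger drive, then resilver onto it
zpool offline tank da0
zpool replace tank da0
zpool status tank          # wait for the resilver to finish before the next disk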

I’ve chosen a single Z3 VDEV since I’m not looking for max performance, just a balance of resiliency vs. energy efficiency. We pay $0.25/kWh so spinning a lot of older drives does add up. My rig runs on 90W during idle and peaks around 200W during boot. With heavy disk use, the system might draw 105W.
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
Had some fun trying to figure out how to update the cards to the latest firmware (Broadcom's "instructions" aren't that helpful unfortunately). Ended up flashing FreeDOS to a USB key, and wrote some DOS batch scripts to ensure I didn't mistype any of the commands, so now have both flashed to 20.00.07.00 and they seem to work fine.

Well done, @EPU! If you have anything to share about the flashing process, or are able to endorse any of the information in previous posts here, it'd be great info to have.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
I have 12 bays in my system at home; 36 at work...

Home; tested:
6 VDEV 2-disk Mirrors = best performance overall
2 VDEV 6-disk RAIDZ2 = decent performance
1 VDEV 12-disk RAIDZ2 = best capacity

Work:
I tested 1-4 RAIDZ2 VDEVs on this system, which was populated with 16 disks at the time, using all of the disks (I'm not a huge fan of hot spares). I discovered there is a diminishing return, with nothing much notable beyond 3 RAIDZ2 VDEVs (only about a 10%-15% gain, in our configuration). We decided to split into 2 VDEVs x 12 disks, with the thought of populating the remainder of the array with 12 more disks later and expanding the pool with a 3rd RAIDZ2 VDEV.
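If you want to run the same kind of comparison yourself, a quick-and-dirty sequential test is enough to see the differences between layouts. A sketch (pool and dataset names are placeholders; use a file larger than your RAM so the read-back isn't just served from ARC):

# Dataset with compression off so the zeros aren't compressed away
zfs create -o compression=off tank/bench

# Sequential write, then read back (~64 GiB here)
dd if=/dev/zero of=/mnt/tank/bench/testfile bs=1M count=65536
dd if=/mnt/tank/bench/testfile of=/dev/null bs=1M

# Clean up
zfs destroy tank/bench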

* My thought on hot spares: the disk is running and accruing the same wear/hours as the operating disks, so I would rather have the capacity online and keep extra disks sitting nearby to swap in when there is a failure/event. Built properly (i.e., with at least double redundancy), this SHOULD result in a recoverable event with fresh disks. In addition, as a SysAdmin, it's my responsibility to handle that task with urgency regardless of when the alert happens; ideally, the maximum time that one layer of redundancy is removed from the equation would be an hour or less. This is probably not a popular opinion, but it's what I've lived by for 20+ years in the industry...
 