Newbie trying to understand best practices for designing vdevs

SRussell

Cadet
Joined
Dec 18, 2019
Messages
3
My current hardware inventory:
SuperMicro SC216
SuperMicro 847
19x 1.6TB SATA Intel 3500
20x 10TB SATA HGST

Current setup:
Synology 12 bay NAS
6x 10TB SATA HGST in Btrfs RAID 6
16 TB in use

My plan is to move 8TB of storage to a RAIDzN setup on the Intel SSDs, and also to run numerous VMs on that JBOF. I expect to stay under 16TB of space on the JBOF, and to consume around 40-60TB on the HGST JBOD.

Now the confusion has started. I have read so many articles and posts about 'best practices' that I feel I have entered analysis paralysis.

1. How do you begin to determine drive setups based on drive profiles? How do you account for drives when you are looking at 2/4/8/10/14TB and potentially larger drives? At what point in drive size do you say you need to move to RAIDz3 or run RAID60?
2. What is a safe setup for platters, and in what quantity? e.g. Is an 8-drive RAIDz2 the standard, or does that change based on drive size, cache, SATA vs SAS, and whether a drive is SMR or not? How do you begin to factor all of that in?
3. Does the triad of performance, capacity and integrity change when you move to flash storage? Does it change again if using enterprise flash?
4. Does the use of a hot spare change with drive types? My guess is there is no issue leaving a flash drive as a hot spare, but I do not think it would be good practice to leave a platter SATA drive in hot standby.

I had posted the above on another forum. After reading Introduction to ZFS I think I can answer a few of my questions, but I am still unsure.

It seems like an 8-drive RAIDz2 is pretty good for performance, and a 9-drive RAIDz3 gives better integrity.
I was under the impression that power-on hours degrade a rotational drive over time, so I am skeptical about a platter hot spare.
I could not find any data relating to SMR, but drives over 6TB should apparently always be mirrored or in RAIDz2/3.
I could not find any data that directly addresses flash storage.
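A quick sketch of my own back-of-the-envelope math (not from the ZFS docs) comparing the two layouts above on the 1.6TB SSDs:

```python
# Rough usable-capacity estimate for a RAIDz vdev. Ignores ZFS
# metadata, padding, and the usual ~80% fill guideline, so real
# numbers will be lower.
def raidz_usable_tb(drives, parity, drive_tb):
    """Usable TB for one RAIDz vdev: data disks = drives - parity."""
    return (drives - parity) * drive_tb

# 8-wide RAIDz2 vs 9-wide RAIDz3 on 1.6 TB SSDs
z2 = raidz_usable_tb(8, 2, 1.6)   # 6 data disks
z3 = raidz_usable_tb(9, 3, 1.6)   # 6 data disks
print(f"8-wide Z2: {z2:.1f} TB usable, survives 2 drive failures")
print(f"9-wide Z3: {z3:.1f} TB usable, survives 3 drive failures")
```

Both land at 9.6 TB of raw usable space; the 9-wide Z3 spends one extra drive to tolerate a third failure.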
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
There is no standard setup for vdevs; it depends on your needs.

iXsystems RAID-layout example for a TrueNAS X10-HA with 11 x 1.9 TB flash disks:

Virtualization     File Share         Databases
2 x 5-wide Z1      1 x 10-wide Z2     5 x Mirror
15 TB usable       15 TB usable       9 TB usable
  1. The setup depends mainly on the usage profile (do you prefer IO or capacity?). For disks larger than 6TB, RAIDz1 should be replaced by RAIDz2 to avoid going too long without redundancy during a resilver. For disks larger than 8TB, I even prefer RAIDz3 over RAIDz2.
  2. There is no standard. I personally go up to 16 disks in RAIDz3 for backup applications (Veeam). Avoid SMR disks: they are barely cheaper and behave very badly under most workloads.
  3. When you move to flash you have far more IOPS, so you can trade some of them for more capacity.
  4. I dislike hot spares (except when the system is at a remote site): I prefer either a cold spare (to avoid wearing the drive) or a higher RAID level (Z2 -> Z3).
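The usable figures in the layout table above can be reproduced with simple arithmetic (my own sketch; real pools lose a bit more to metadata and padding, and the table rounds down):

```python
DISK_TB = 1.9  # flash disks in the TrueNAS X10-HA example

def raidz_usable(vdevs, width, parity):
    # Each RAIDz vdev keeps (width - parity) disks' worth of data.
    return vdevs * (width - parity) * DISK_TB

def mirror_usable(vdevs):
    # Two-way mirrors: one disk of capacity per vdev.
    return vdevs * DISK_TB

print(raidz_usable(2, 5, 1))   # Virtualization: 2 x 5-wide Z1 -> 15.2 TB
print(raidz_usable(1, 10, 2))  # File share: 1 x 10-wide Z2 -> 15.2 TB
print(mirror_usable(5))        # Databases: 5 x mirror -> 9.5 TB
```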
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
VM storage is normally stripes of mirrors: nine mirror vdevs of 2 x 1.6TB in one pool, with one extra drive as either a hot or cold spare.
For bulk storage I would suggest two 10 x 10TB vdevs in one pool.
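Rough numbers for this suggestion (my own arithmetic; I am assuming RAIDz2 for the 10-wide vdevs, which is not specified above):

```python
# VM pool: 9 two-way mirror vdevs of 1.6 TB SSDs (18 drives + 1 spare).
vm_pool_tb = 9 * 1.6

# Bulk pool: 2 vdevs of 10 x 10 TB drives; RAIDz2 per vdev is an
# assumption, giving 8 data disks per vdev.
bulk_pool_tb = 2 * (10 - 2) * 10

print(vm_pool_tb)    # 14.4 TB usable for VMs
print(bulk_pool_tb)  # 160 TB usable for bulk storage
```

Note that 14.4 TB is already under the 16 TB target before applying the usual advice to keep pools well below full.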
 

G8One2

Patron
Joined
Jan 2, 2017
Messages
248
I would suggest smaller vdevs with multiple mirrors. The more mirrored vdevs you have, the more IOPS you benefit from. A Z2 or Z3 pool will give you some redundancy in case a drive or two fails, but any one vdev is limited to the transfer speed of its slowest drive. So having multiple mirrored vdevs both increases your pool size and scales your IOPS.
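The scaling described above can be sketched numerically. This is a rule-of-thumb model, not a benchmark: the assumption is that each top-level vdev delivers roughly one member drive's worth of random IOPS, and the per-drive figure is a hypothetical one.

```python
# Rule of thumb: random IOPS of a pool scale with the number of
# top-level vdevs, since each vdev performs roughly like one drive.
def pool_random_iops(vdev_count, per_drive_iops):
    return vdev_count * per_drive_iops

DRIVE_IOPS = 200  # hypothetical figure for a 7200 rpm SATA disk

# 20 disks arranged as 10 mirror vdevs vs 2 x 10-wide RAIDz2 vdevs
print(pool_random_iops(10, DRIVE_IOPS))  # mirrors: ~2000 IOPS
print(pool_random_iops(2, DRIVE_IOPS))   # raidz2:  ~400 IOPS
```

Same 20 disks, roughly five times the random IOPS from the mirror layout, at the cost of usable capacity.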
 