Disk Layout

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
In terms of speed versus reliability, which option would be the better choice: one pool with two RAIDZ1 vdevs, i.e. 2 x (3 data disks + 1 parity), or one pool with four mirror vdevs, i.e. 4 x (1 data disk + 1 mirror)? I plan to start with 8 disks, with the potential to grow to 16. My assumption is that the mirror layout would be easier to grow, since all I would have to do is add a two-disk mirror vdev rather than a whole 4-disk RAIDZ1 vdev.
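For reference, a rough sketch of the two layouts in standard zpool syntax, with hypothetical device names (da0 through da7) and a hypothetical pool name (tank):

# Option 1: one pool made of two 4-disk RAIDZ1 vdevs
zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7

# Option 2: one pool made of four 2-disk mirror vdevs
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7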
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Please review the following:

Terminology and Abbreviations Primer
https://www.ixsystems.com/community/threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
in terms of speed
If you are after IOPS, you want mirror vdevs. Mirror vdevs are also more flexible with regard to expanding capacity, because you only need to add two disks at a time.
versus reliability
If you want reliability over time, you should be looking at RAIDz2.
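A minimal sketch of that expansion, again with hypothetical names; growing a pool of mirrors takes one command and two disks:

# Add one more two-disk mirror vdev to the hypothetical pool "tank"
zpool add tank mirror da8 da9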
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
In short, even RAIDz2 is only a temporary answer as drive sizes keep increasing. If you had to pick between RAIDZ1 and mirrors, which one would you pick?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I use RAIDz2 for the systems I build both at home and at work. I have had enough instances where two drives failed that I know single-drive redundancy is not enough. If you have a need for speed, you need more vdevs, no question about it. You can do three-way mirrors, but I would feel unsafe with two-way mirrors. I built a pool like that for testing one time and ran it for a few months with test data only. IOPS scale with vdevs.
At work, I manage servers that store nearly a petabyte of data, and they are all running 6-drive RAIDz2 vdevs. From the research and testing I have done, that gives me the most usable space with an acceptable level of redundancy to protect against drive failure.
Understand that the performance of the system is limited by network speed and drive speed. If you are running a 1-gigabit network, that is as fast as the data will ever flow, but IOPS can still be starved if you have many small input/output operations, because IOPS scale with the number of vdevs. If you don't have a workload that is dependent on IOPS and are simply looking for the most storage, it is a different question. You could use a single 8-drive RAIDz2 vdev and add a second vdev later to grow the pool capacity.
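A sketch of that growth path, with hypothetical device and pool names:

# Start with a single 8-drive RAIDZ2 vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Later, grow capacity by adding a second 8-drive RAIDZ2 vdev
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15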

The answer depends more on what the purpose of the storage is, but a single drive of redundancy would not be adequate for me, whether it was a mirror or RAIDz1. I used RAIDz1 years ago, when I only had 4 drives at 1 TB each, but I moved to RAIDz2 when I went to 2 TB drives. I have tried RAIDz3 and I have tried having a hot spare; I found the best solution for me was keeping a supply of cold spares. Even the 12 TB drives I am using at work resilver in under 12 hours. The speed of resilvering is another reason to keep the number of drives in the vdev low: the system needs to scan the rest of the drives during the rebuild, and having more drives appears to slow that process down. Rebuilding (resilvering) is on a per-vdev basis. I have had situations where I needed to replace two drives in the same vdev at the same time. A couple of years ago, I had two drives fail within seconds of each other. One more drive failure in that vdev would have lost the pool.

About five years ago, when I was using hot spares, we had a system lose one drive and rebuild onto the hot spare, and before the situation was corrected, the two adjacent drives also failed, because the initial drive that failed got so hot it killed the neighboring drives. The log recorded the temperature as 145°C on the first failed drive. It was so hot that I couldn't hold it in my hand when I pulled it from the system. That is the only time I ever had three drives in one vdev fail. Now that we have a fully redundant backup server, I don't worry as much about the primary storage going down.
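For reference, swapping in a cold spare is a single command (hypothetical names; da3 is the failed drive, da9 the replacement), and only the affected vdev resilvers:

# Replace the failed drive with the cold spare
zpool replace tank da3 da9

# Watch the resilver progress
zpool status tank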

I see it as a choice about risk of data loss. What is your backup strategy?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
In short, even RAIDz2 is only a temporary answer as drive sizes keep increasing. If you had to pick between RAIDZ1 and mirrors, which one would you pick?
I would need a very compelling reason to choose either. For me, it is RAIDz2 or nothing.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
in terms of speed
Usage? What is the need for speed? What kind of network are you trying to serve? What kind of client is accessing the system?
There needs to be a scale of reference here. What is fast enough?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
With 6-disk RAIDz2 vdevs, what are your experiences regarding triple drive failures?
Short answer: I don't think it is likely if you are using good-quality drives. The one time we had a triple drive failure was with WD Red drives, and that was four or five years ago.

Several years back, when we had the triple drive failure, we were using 8 drives per vdev and we had two hot-spare drives for every 16 drives in the server. It was a fairly large system with 4 SAS drive shelves, each holding 16 drives, and the hot-spare drives were in the server that the drive shelves were attached to. That worked fine, but we had 64 active drives and 8 drives that were just sitting there being hot... We could have made another entire data vdev out of the drives that were just sitting around being ready, and in the couple of years running that way, we only had one event where having the hot-spare even mattered. I monitor my systems pretty closely and (unless it is a weekend) I am able to change a failed drive within 24 hours, because we keep a supply of cold spares and the rebuild only takes a few hours. Your mileage may vary, because rebuild time is influenced by the amount of data on the drive, the speed of the drive, the load on the system, etc.
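For completeness, a minimal sketch of how hot spares are designated in ZFS, with hypothetical names; a spare is attached automatically when a drive faults:

# Designate two hot spares for the hypothetical pool "tank"
zpool add tank spare da16 da17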

In the last seven years (at this location), out of a population of over 300 drives in the nine servers I manage (some have as few as 12 drives), I have only had TWO double disk failures (with no data loss) and ONE triple disk failure, also with no data loss because of the hot-spare. Keep in mind that I will change a drive for having just one bad sector. Anyhow, I talked to management about how close they came to losing 20 years of data when that triple disk failure happened, and we now have an active backup server, so all the data on the main server is replicated to the backup.

No RAID of any kind is a substitute for a backup.
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
The use cases for my NAS are as follows, with backup strategy:

1. Family data - will need backup, either to another dedicated disk (offline) or to the cloud. Perhaps 4 TB.
2. Plex - no special backup needed. Some of it will be backed up with the family data; for most of the data I still have the physical media (DVD / Blu-ray).
3. Lab data
a. Programs / code / ISOs - these will either be backed up with the family data or are already uploaded to GitHub - 2 TB.
b. VM data - ephemeral virtual machines. These can easily be rebuilt, as I do my best to automate 90% of everything I do. Furthermore, other than a handful of VDIs and a dedicated SQL server, most of my VM workloads are pretty lightweight.

In addition, all my servers and PCs currently have only gigabit networking, not 10 GbE. I am interested in installing a dual-port 10 GbE card in the TrueNAS box to ensure the pipe going back to the iSCSI target is big enough for all the potential endpoints (4 ESX hosts and a few Windows 10 machines).
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I am interested in installing a dual-port 10 GbE card in the TrueNAS box to ensure the pipe going back to the iSCSI target is big enough for all the potential endpoints (4 ESX hosts and a few Windows 10 machines)
I see the iSCSI in there along with ESX, and it kind of changes things. It would (perhaps) be better to make two pools: one for the family data / movies and a separate pool for the iSCSI. The reason is that iSCSI needs many vdevs (typically mirrors) to get the IOPS to make it perform well, but you said the VM workload is lightweight and ephemeral. From that I am guessing you don't need a large amount of storage, and if you lose a disk that causes data loss in the iSCSI pool, it might take a little time to recover, but it could be recovered from without too much pain.

When I was running iSCSI for my ESXi cluster (just two nodes) in my home lab, I used sixteen mirrored 500 GB mechanical drives and I thought it worked pretty nicely. That gave me eight vdevs, which was good, but still a little slow.

If I were to do it again now, in a home lab, I would consider using eight SSDs and running with no mirror. For an iSCSI pool, I think it would be quite fast, and you can back up your VMs to a regular storage pool created separately using RAIDz2, which gives better economy of space while still protecting against two mechanical drive failures.
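As a sketch of that two-pool layout, with hypothetical device names (ada0-ada7 for the SSDs, da0-da7 for the mechanical drives):

# Fast pool for iSCSI: eight SSDs striped, no redundancy
zpool create fast ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7

# Bulk pool for SMB shares and VM backups: one 8-drive RAIDZ2 vdev
zpool create bulk raidz2 da0 da1 da2 da3 da4 da5 da6 da7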

You can pick up used enterprise SSDs on eBay for pretty low money. Here is an example:
Eight of those would only be $295.92 and would give you 1.5 TB of fast SSD storage for your VMs, and you could still have another pool, shared by SMB, for storing big data like movies that don't need the speed that iSCSI needs.

If I were building this for work, for mission-critical things, I wouldn't suggest single drives / no mirror, but for a home lab, and using SSDs, it should be reliable enough that you may never have a failure, or not for years.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
@Chris Moore One iSCSI VM use case isn't the same as another... While I agree SSDs are preferred, how many and how to use them depends on the load.
Case 1: A home virtualization server (a Pi-hole, a game server or two, Nextcloud - all as ESXi virtual machines)
Might get by just using hard drives, 64 GB+ RAM, and some L2ARC.

Case 2: A small home lab
Requires either:
a. A lot of vdevs
b. A dedicated SSD vdev (or two)
c. Using iSCSI with the file extent option and tricking the small-blocksize offload into sending all content to a dedicated small-blocksize SSD vdev (see the sketch after this list)

Case 3: Professional-use ESXi
Lots of IOPS. Period.
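A minimal sketch of option 2c, under the assumption that it refers to ZFS allocation classes (a special vdev plus the special_small_blocks property), with hypothetical device, pool, and dataset names:

# Add a mirrored special vdev made of SSDs to the hypothetical pool "tank"
zpool add tank special mirror ada0 ada1

# With special_small_blocks equal to recordsize, every block of this
# dataset lands on the SSD special vdev
zfs set recordsize=64K tank/iscsi
zfs set special_small_blocks=64K tank/iscsi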
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
In my case, I plan to start with 64 GB of RAM, with the option to go to 128 GB. L2ARC is easy to add, but I am not sure it will add much value in my VM use case. Right now, my ESX hosts are using consumer-grade HDDs and SSDs in a vSAN configuration without much issue. The case I am using is a Fractal Design Define 7 XL, so I have the option to add SSDs as a dedicated pool, but I would prefer not to go there out of the gate if I do not have to. Given that I will have a VDI workload, I am more concerned about write IOPS (hence a SLOG) than about reads.
P.S. Since the VMs are provisioned by template, and I have PowerShell-scripted everything (from template creation with just the ISO to the post-installation of the VM from the template), it would take time, but no additional labor is needed to rebuild the environment if a drive crashed. It would still suck, just not as bad ;)
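For reference, a minimal sketch of adding a SLOG (and, by contrast, an L2ARC) to an existing pool, with hypothetical device names; only synchronous writes, such as iSCSI with sync=always, benefit from a SLOG:

# Mirrored SLOG, so a device failure can't bite during a crash
zpool add tank log mirror ada0 ada1

# L2ARC is a read cache and does not need redundancy
zpool add tank cache ada2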
 