Using 12TB / 14TB / 16TB drives with RAIDZ2 / Z3

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Hi All,

Build: FreeNAS-11.1-U6
System use: Veeam backup repository / template VMs (which are just deployed from there to our primary storage) - no live workloads on the system

We built a FreeNAS storage server in a 24-drive Supermicro chassis last year (with the excellent advice of some members of this forum) and used 8TB drives arranged in two vdevs of six drives each, both RAIDZ2. We're looking to add some more capacity in the spare slots of the chassis, and since the only bottleneck to adding more drives at the moment is slots in the case, we were considering larger drives for the new vdev.
The disks we were thinking of using are 12TB / 14TB / 16TB Exos drives, which are rated for the vibration of very high drive counts per enclosure.

I've done a decent amount of googling and not really found much of a consensus on using drives that large, or on whether they are an issue for rebuilds under RAIDZ2.
The scenario I'm aiming to avoid is a second drive failing during a rebuild and then a URE on one of the remaining disks, or a third full failure. This got me thinking: would it be worth using RAIDZ3 for such large drives? I haven't managed to find anything definitive.
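To put rough numbers on that worry, here's a back-of-envelope sketch of the classic URE-during-rebuild math. All the figures are assumptions called out in the comments, not vendor data:

```python
import math

def ure_probability(drive_tb, surviving_drives, bits_per_ure=1e15):
    """Chance of at least one unrecoverable read error (URE) when every
    surviving drive is read in full during a rebuild, via the standard
    Poisson approximation. bits_per_ure=1e15 is the typical rating for
    enterprise drives such as the Exos line; consumer drives are often
    rated 1e14."""
    bits_read = drive_tb * 1e12 * 8 * surviving_drives
    return -math.expm1(-bits_read / bits_per_ure)

# Worst case for a 6-wide RAIDZ2: two 16 TB drives already failed, so a
# URE on any of the 4 survivors during the rebuild is unrecoverable.
print(f"rated 1e15: {ure_probability(16, 4):.1%}")
print(f"rated 1e14: {ure_probability(16, 4, bits_per_ure=1e14):.1%}")
```

Two caveats in ZFS's favour: with only one drive down in RAIDZ2 the remaining parity can still repair a URE, and a resilver only reads allocated blocks, so a half-full vdev reads half as much as this worst case assumes.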

There is also the issue of mixing vdev types in the same pool: if there were enough disks in the RAIDZ3 vdev, would adding it give roughly the same performance increase as adding another 6-drive RAIDZ2 identical to the existing two, and therefore the equivalent of one additional spindle in the pool?

Could people please let me know their thoughts on this and perhaps point me to some relevant analysis?
 
Joined
Jul 3, 2015
Messages
926
I run quite a lot of 90-bay systems and by default do 15-disk Z3 vdevs x 6 to make up my pools. Historically this was with 8TB HGST SAS drives, then I started using 10TB drives, and now I'm on 12TB. So from my perspective, I wouldn't be worried about using the drive sizes you mentioned. Personally, I wouldn't go wider than a 10-disk Z2 vdev or a 15-disk Z3, but that's just my comfort zone.

I'm not a big fan of mixing vdev sizes and layouts within a given pool but that's just my thoughts.

PS: for info, my systems resilver within 24 hours of a disk replacement (often quicker), and most are at about 50% capacity, so I would guess that at full capacity it would be no more than 48 hours.
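As a rough sanity check on those times: because ZFS resilvers only allocated data, the duration mostly tracks how full the vdev is and the sustained rebuild rate. A quick sketch, where the 150 MB/s average rate is an assumption rather than a measurement:

```python
def resilver_hours(drive_tb, fullness, avg_mb_s=150):
    """Hours to rewrite one drive's worth of allocated data at an assumed
    sustained average rate. Real resilvers slow down on fragmented pools,
    so treat this as a lower-bound estimate."""
    data_bytes = drive_tb * 1e12 * fullness
    return data_bytes / (avg_mb_s * 1e6) / 3600

print(round(resilver_hours(12, 0.5), 1))  # 12 TB drive, 50% full
print(round(resilver_hours(12, 1.0), 1))  # same drive at full capacity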
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Thanks Johnny. So are you using RAIDZ3 with 8TB and larger drives due to the size of the individual drives, the large width of your vdevs, or both? I'm trying to get a handle on the risk of a vdev having a disk failure and then additional failures or UREs causing loss of the pool.
 
Joined
Jul 3, 2015
Messages
926
The size of the drive doesn't really come into it for me, as I need these 90 bays to offer as much storage as possible, so I'm always going to use big drives. So yes, the width is the most important factor. After running 10-disk Z2 vdevs for a few years I decided to take the plunge into 15-disk Z3, which I consider effectively the same from a resiliency point of view, and I have been running these for the last few years without issue.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
Even if you have a backup, restoring hundreds of TB takes a few days.
Therefore, with RAIDZ3, you do not have to worry if a disk failure happens while you are on holiday.
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Even if you have a backup, restoring hundreds of TB takes a few days.
Therefore, with RAIDZ3, you do not have to worry if a disk failure happens while you are on holiday.
Ah - well, this system is the storage location of our backups, so we're probably not going to back that up too (except for a couple of things like template VMs that we're replicating to another FreeNAS system, but the bulk is Veeam backup data and there is too much of that to back up again).
To note, though: we have a hot spare in the system and someone is on call every day to deal with a disk failure.

I just need to get an idea of the risk of using such large drives with RAIDZ2, the rationale for using RAIDZ3, and what its effect on the pool's IOPS/throughput would be.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
I think that using RAIDZ2 with 6-disk vdevs is safe, especially if you have a hot spare or an on-duty technician to deal with a disk failure.
RAIDZ3 needs more CPU and memory than RAIDZ2, but it usually has no impact on IOPS, because IOPS depends only on the number of vdevs.
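To illustrate that trade-off, a quick sketch comparing two hypothetical layouts: usable capacity tracks the number of data disks, while random IOPS tracks the number of vdevs regardless of their type. The layouts and the 14 TB drive size are just examples, and ZFS overhead (padding, metadata, slop space) is ignored:

```python
drive_tb = 14  # example drive size, not a recommendation

def layout_summary(vdevs, width, parity):
    """Return (total drives, data-disk capacity in TB, vdev count)."""
    data_disks = vdevs * (width - parity)
    return vdevs * width, data_disks * drive_tb, vdevs

for name, shape in {
    "3 x 6-disk RAIDZ2": (3, 6, 2),
    "2 x 12-disk RAIDZ3": (2, 12, 3),
}.items():
    drives, tb, vdevs = layout_summary(*shape)
    print(f"{name}: {drives} drives, ~{tb} TB of data disks, "
          f"{vdevs} vdev(s) of random IOPS")
```

So for a similar drive count, the wider-RAIDZ3 pool gains capacity but gives up a vdev's worth of random IOPS, which matters little for a sequential-heavy backup target.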

I use 2 x 12-disk RAIDZ3 vdevs with 14 TB disks for my Veeam backup repository.
In my case, the bottleneck is always the source (the VMware datastore).

Note: you should really have a 2nd copy of your Veeam backup.
The reason is explained in this small ebook:
The ebook is sponsored by Veeam but applies to any backup solution.
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Haha - 'Bottleneck: Source 97%' is a Veeam job status I know all too well.
The reason I'd hesitated over using RAIDZ2 with 6 disks and drives that large is the various articles about getting a URE or an additional disk failure during a rebuild. They were initially about RAID5, but we're potentially in the territory where it applies to RAID6 too.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
If you use your FreeNAS server only as a Veeam target, there are two cases:
  1. You have the licence for VMware Storage vMotion, so you will probably want to leverage Veeam Instant VM Recovery. Therefore you need a decent amount of IOPS, and you can build your zpool from 6-disk or 8-disk RAIDZ2 vdevs.


  2. You do not have the licence for VMware Storage vMotion, so you will probably never use Veeam Instant VM Recovery.
    In this case, 12-disk RAIDZ3 vdevs are the most space-effective way to build your zpool, because Veeam does not require many IOPS for its other tasks.
 