FreeNAS Build Questions - Pool Size


Donnerschlag

Cadet
Joined
Sep 3, 2018
Messages
7
So I am building a bare-metal FreeNAS box with no future plans for VMs.

CPU: Intel Xeon E5-2650L
Motherboard: SuperMicro X9SRi-F
RAM: 4x 16GB DDR3 ECC (CT16G3ERSLD4160B)
HBA: Dell H310 6Gbps SAS HBA w/ LSI 9211-8i P20 IT Mode
Hard Drives:
12x 8TB Western Digital RED from EasyStore
1x 500GB Samsung 860 EVO cache drive


So I was reading that I should not put more than 11 drives in one pool. I was looking at doing 12 drives in a single RAID-Z2. What would you recommend, and why?

I will mainly be using this for media files and recordings.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
With those drives, for infrequently accessed storage, go with 2x 6-wide RAIDZ2 vdevs. For media storage or other read-heavy applications, use 6x 2-way mirror vdevs.
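To put rough numbers on that trade-off, here is a quick capacity sketch (assuming the 12x 8TB drives from the build list; raw data space before ZFS metadata and free-space overhead):

```python
DRIVE_TB = 8  # assumption: the 12x 8 TB Reds from the build list

# layout name -> (vdevs, disks per vdev, redundant disks per vdev)
layouts = {
    "2 x 6-wide RAIDZ2": (2, 6, 2),
    "6 x 2-way mirrors": (6, 2, 1),
}

for name, (vdevs, width, redundant) in layouts.items():
    data_disks = vdevs * (width - redundant)
    print(f"{name}: ~{data_disks * DRIVE_TB} TB raw data space, "
          f"survives {redundant} failure(s) in any one vdev")
```

Mirrors buy IOPS and fast resilvers at the cost of a third of the space (48 TB vs 64 TB here).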
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
For media storage or other read-heavy applications, use 6x 2-way mirror vdevs.
You can do this, but it's a waste of disks. For media (unless you're editing video live from the NAS) you're doing mostly sequential access, and that can just as easily be served by a RAIDZ2. The best case for striped mirrors is VM storage: VMs (in quantity) generate a large amount of random IO, and that is best served by lots of small vdevs.
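A rule-of-thumb sketch of why vdev count matters for random IO (the ~100 IOPS per 7200rpm disk figure is an assumption, not a benchmark):

```python
DISK_IOPS = 100  # assumption: ballpark random-read IOPS for one 7200 rpm drive

def pool_random_read_iops(vdevs: int, disks_per_vdev: int, mirror: bool) -> int:
    # Rule of thumb: a RAID-Z vdev serves random reads like a single disk,
    # while each side of a mirror can serve reads independently.
    per_vdev = DISK_IOPS * disks_per_vdev if mirror else DISK_IOPS
    return vdevs * per_vdev

print("2 x 6-wide RAIDZ2:", pool_random_read_iops(2, 6, mirror=False), "IOPS")
print("6 x 2-way mirrors:", pool_random_read_iops(6, 2, mirror=True), "IOPS")
```

For streaming media, either layout saturates a 1Gb link; the mirror advantage only shows up under lots of random IO.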
The low-power chips do not save much, if any, power at idle. Also, 8 cores and 16 threads is overkill unless you're transcoding a bunch of streams or 4K; even then, there's still an argument for fewer cores and a faster clock.
For a media server there's not much of a reason to use 64GB of RAM, but it won't hurt.
HBA: Dell H310 6Gbps SAS HBA w/ LSI 9211-8i P20 IT Mode
I have the same card. It works well; just watch the temps if it's not in a rackmount case.
12x Western Digital Red from EasyStore
I would do two RAIDZ2 vdevs. It will be ultra-reliable and fast for media.
1x 500GB Samsung 860 EVO cache drive
As a SLOG: if you're not using NFS or iSCSI, there's no reason for this, and it will not be used by the system.
As an L2ARC: you could benefit from a small cache drive if you're editing video ON the NAS. For a general 1-5 user media server there's no point. I run 20+ VMs over iSCSI with 32GB of RAM and don't bother, and I get amazing performance. See my sig for details.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@Donnerschlag, I also agree with:
go with 2x 6-wide RAIDZ2 vdevs.

One of the problems with wide RAID-Zx vDevs is that they can get fragmented. ZFS wants to write full-width stripes when possible. If it can't (a small file, or a small update to a larger file), then ZFS writes a narrower stripe. Over a long time, this causes more fragmentation than a less wide RAID-Zx vDev would see.
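A simplified allocation model shows why width matters (a sketch assuming 4K sectors, i.e. ashift=12; RAID-Z adds parity per stripe row and pads allocations to a multiple of parity+1):

```python
import math

def raidz_alloc_sectors(block_bytes: int, width: int, parity: int,
                        ashift: int = 12) -> int:
    """Approximate sectors a RAID-Z vdev allocates for one block (simplified)."""
    sector = 1 << ashift
    data = math.ceil(block_bytes / sector)
    rows = math.ceil(data / (width - parity))               # stripe rows needed
    total = data + rows * parity                            # parity per row
    return math.ceil(total / (parity + 1)) * (parity + 1)   # allocation padding

for width in (6, 12):
    for kib in (16, 128):
        used = raidz_alloc_sectors(kib * 1024, width, parity=2)
        print(f"{width}-wide Z2, {kib:3d} KiB block: "
              f"{used} sectors for {kib * 1024 // 4096} data sectors")
```

A 16 KiB block takes the same 6 sectors on either width: small blocks cannot fill a wide stripe, so a wide vdev accumulates more of these partial stripes scattered through its free space.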
 

Donnerschlag

Cadet
Joined
Sep 3, 2018
Messages
7
Having two pools of 6 drives in Z2 seems like a waste of space to me. Would 11 drives in Z2 be better? This is only for storing and streaming media. I will have a second server with a more powerful 12-core Xeon doing my Plex and VMs; it will be connected by a 10GbE Ethernet card.

I am already at 25TB of data with my JBOD. I got 64GB of DDR3 ECC RAM since everyone was saying 1GB of RAM per 1TB of storage, and I went ECC for better reliability. There will be no VMs on this FreeNAS box; it is only for sharing files.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
You never said how big the WD Reds were. Will the files be backed up? How much work would it be to rebuild from backups or replace lost data? These are the questions you need to ask yourself.
 
Joined
Jul 3, 2015
Messages
926
I would say that if you monitor your server daily/weekly with SMART checks and the rest, have spare disks to hand, and have the time to replace a disk as soon as it looks sad or actually fails, then an 11-disk Z2 is fine.

PS: I'd also say a 12-disk Z2 is fine. Personally, my max is 10-disk Z2 vdevs and 15-disk Z3, and has been for a while using 8TB, 10TB, and 12TB drives. I only use HGST SAS helium drives, however, which I find to be very good indeed. I normally look at the chassis I'm filling and decide if I want hot-spares or not, and that gives me my vdev size. For example, on my 60-bay units I do 6x 10-disk Z2, and on my 90-bay units I do 6x 15-disk Z3, as I decided not to have hot-spares. It's worth noting, however, that I replicate all my systems to another identical box in another location every day.
 

Donnerschlag

Cadet
Joined
Sep 3, 2018
Messages
7
Oops, sorry, I thought I had included that. It is 12x 8TB WD Reds.

So you are saying to do 11 disks in Z2 with one cold spare? Or would it be better to have all 12 drives in Z3? Why is 12 drives a bad number?

I do have another 8TB WD Red white label that I can use as a cold spare as well. This is just media files (movies, TV shows, game recordings). It is nothing super important; I will have some of the important ones on a spare 8TB Seagate Archive drive that will be kept in cold storage.
 
Joined
Jul 3, 2015
Messages
926
Ideally, I would say you want one or two cold spares. Nothing wrong with the number 12, but if you have 12 disks then use 11 and keep at least one as a spare. How many disks can you fit into your system?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
The concern is the chance of an additional failure during the rebuild after a failure. The more drives, and the bigger they are, the higher the chance. Also, if you factor in that all the drives have similar manufacture dates and wear, the odds go up dramatically. There is also the performance argument: the more vdevs, the faster.
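As a back-of-the-envelope illustration (the failure rate and rebuild window are assumptions, and treating failures as independent is optimistic when the drives share a batch and wear):

```python
ANNUAL_FAILURE_RATE = 0.05  # assumption: 5% AFR per drive
REBUILD_DAYS = 3            # assumption: resilver window for an 8 TB drive

def p_another_failure(surviving_drives: int) -> float:
    # Chance at least one surviving drive fails while the rebuild runs.
    p_one = 1 - (1 - ANNUAL_FAILURE_RATE) ** (REBUILD_DAYS / 365)
    return 1 - (1 - p_one) ** surviving_drives

for n in (5, 11):  # one drive down in a 6-wide vs a 12-wide vdev
    print(f"{n} surviving drives: {p_another_failure(n):.2%}")
```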
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Ideally, I would say you want one or two cold spares. Nothing wrong with the number 12, but if you have 12 disks then use 11 and keep at least one as a spare. How many disks can you fit into your system?
The funny thing about this is that you're only saving one disk of capacity over two Z2 vdevs. There really is no one correct answer. It's all about risk and performance (capacity, throughput, and IOPS).
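The arithmetic behind that one-disk difference, using the 12 bays from this build:

```python
# Data disks: an 11-wide Z2 keeps 11 - 2 = 9; two 6-wide Z2 vdevs keep 2 * (6 - 2) = 8.
print("11-wide Z2:", 11 - 2, "data disks vs 2 x 6-wide Z2:", 2 * (6 - 2))
```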
 
Joined
Jul 3, 2015
Messages
926
The funny thing about this is that you're only saving one disk of capacity over two Z2 vdevs. There really is no one correct answer. It's all about risk and performance (capacity, throughput, and IOPS).
True, but with two 6-disk Z2 vdevs he has no spare. I think cold spares are important, as more often than not I replace a drive before it has actually failed, and if you don't have a cold spare you can't do that. But like you said, there is no right or wrong really, so long as you don't go silly big and then forget to even look at your system.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Having two pools of 6 drives in Z2 seems like a waste of space to me. Would 11 drives in Z2 be better? This is only for storing and streaming media.
...
You can have 2 vDevs in the same pool. So, 2 vDevs of 6-disk RAID-Z2 in a single pool gives better performance than a 12-disk RAID-Z2/3, and reduces fragmentation.

If it's truly large files (media) that are rarely updated, then perhaps a single vDev will work.
The problems are:
  • Replacements can take much longer
  • Longer replacements raise the risk of another drive failing mid-rebuild
  • Fragmentation can reduce speed
  • Once fragmentation has reduced speed, only a full backup and rebuild will restore it.
To sum it up, 10-12 disks in a single RAID-Z2/3 vDev is getting a bit wide. Some people have wider vDevs, but their use case may allow it. Some people choose a single 12-disk RAID-Z2/3 because that is the number of slots they have. If it were 20 slots, then 2 vDevs of 10 disks would be the more obvious choice.

In some ways, leaving a free slot can assist in disk replacement. ZFS is one of the few RAID solutions that allows replacement in place, meaning that if the disk to be replaced has not yet failed completely, you can install the replacement and tell ZFS to replace the failing disk with the new one. That causes ZFS to read all the good data from the failing disk, and only bother the rest of the disks when it finds bad blocks. Some people say this can take longer. Maybe. Some people say that if the failing disk is really bad off, but not dead, this can take a very long time compared to a simple replacement. It's a choice that ZFS gives you.
 

Donnerschlag

Cadet
Joined
Sep 3, 2018
Messages
7
Ah, gotcha. Yeah, I have on average 8-12 users on at a time. Most of my files are 6-10GB for 1080p content (1/3 of the data) and about 1GB for 720p content (3/5 of my data). My game recordings are about 8GB in size.
 
Joined
Jul 3, 2015
Messages
926
This is quite a nice little tool. It doesn't do ZFS reliability specifically, but you can work out roughly the same odds for a wide Z2 by selecting RAID6.

https://wintelguy.com/raidmttdl.pl

I did a mission time of 5 years, RAID 6, 10 drives per group, and 6 groups. I set the time it took me to replace a failed disk to 168 hours (so one week) using enterprise-grade drives, and my probability of data loss over the 5 years was 0.0000061720752167.
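For anyone who wants to sanity-check a figure like that offline, here is a sketch of the classic double-parity MTTDL approximation (the 1.2M-hour MTTF is an assumption, and the model is crude, so expect only order-of-magnitude agreement with the calculator):

```python
import math

MTTF_H = 1_200_000    # assumption: enterprise-drive MTTF in hours
MTTR_H = 168          # one-week replacement window, as in the post
MISSION_H = 5 * 8760  # five-year mission time

def p_data_loss_raid6(drives_per_group: int, groups: int) -> float:
    # Double parity loses data when a third drive fails while two rebuild.
    n = drives_per_group
    mttdl = MTTF_H ** 3 / (n * (n - 1) * (n - 2) * MTTR_H ** 2)
    return 1 - math.exp(-groups * MISSION_H / mttdl)

print(f"6 groups of 10, RAID6, 5 years: {p_data_loss_raid6(10, 6):.2e}")
```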
 

Donnerschlag

Cadet
Joined
Sep 3, 2018
Messages
7
Also, one other thing: I will not be using snapshots on this server. It is just for storage.

I think I will do the 2x 6-drive RAID-Z2. If I did add an SSD for cache, is that for read speed? So if users are accessing the same file, it would be written to the cache drive and potentially served faster?
 
Joined
Jul 3, 2015
Messages
926
Also, one other thing: I will not be using snapshots on this server. It is just for storage.

I think I will do the 2x 6-drive RAID-Z2. If I did add an SSD for cache, is that for read speed? So if users are accessing the same file, it would be written to the cache drive and potentially served faster?
SSDs can be used as read and/or write cache, but both depend on the system and the environment. How are your users going to be writing their data: via SMB, AFP, NFS, etc.?

Also, are your users connecting via 1Gb?

Read cache happens automatically in RAM, so an SSD can just extend that, but it's often not worth adding until you already have A LOT of RAM.

A write-cache SSD only helps with sync writes, so NFS and iSCSI. If you are using SMB or AFP, then forget about a write cache.

Also, on a 1Gb link, I doubt you would even notice the difference.
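To see what sync writes mean in practice, here is a small sketch (the temp-file paths are hypothetical; the fsync'd loop is the kind of traffic a SLOG absorbs):

```python
import os
import time

buf = os.urandom(4096)

# Async: write() returns once the data sits in the OS cache.
t = time.perf_counter()
with open("/tmp/async_test.bin", "wb") as f:
    for _ in range(1000):
        f.write(buf)
print(f"async writes : {time.perf_counter() - t:.3f}s")

# Sync: fsync() waits for stable storage -- the guarantee NFS and iSCSI
# request, and the only path a SLOG device speeds up.
t = time.perf_counter()
with open("/tmp/sync_test.bin", "wb") as f:
    for _ in range(1000):
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())
print(f"fsync'd writes: {time.perf_counter() - t:.3f}s")
```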
 

Donnerschlag

Cadet
Joined
Sep 3, 2018
Messages
7
I was leaning more towards CIFS or NFS; I need to research the pros and cons of both. All my machines are Windows ATM, but I might switch to Linux for Plex (90% of the usage of the NAS), so maybe NFS would be better.
 
Joined
Jul 3, 2015
Messages
926
I would say SMB is more universal.
 