SOLVED Best HDD config for general purpose FreeNAS build (I'm desperate)

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
Hi, I'm Steve,

and I am planning a FreeNAS system as central data storage for everything I do at home.
Reading every official and unofficial hardware guide and learning everything I can about OpenZFS, I've become increasingly confused about what the best hardware for my use case is...
To clarify, let me list my requirements:
  • Plex Server with ~6-12TB of movies, shows, music (high sequential reads)
  • GitLab Server (don't think high performance matters here)
  • Sonarr, Radarr and the like (occasional high sequential writes)
  • Nextcloud Server (mostly seq. reads, sometimes writes; small files)
  • iSCSI storage for Proxmox VMs and LXC containers (I've read this is IOPS-intensive)
  • NFS, SMB, AFP shares for me and my family's PCs (mixed workload I think?!)
  • Database storage (mostly random reads/writes, very IOPS-intensive?!)
  • Clients will connect via 1GbE each (my machines will get 10Gb SFP+ cards some time -> 10Gbit NIC for FreeNAS box)
  • total usable storage should be around 20TB
Let me explain how I plan to use FreeNAS as a database storage:
I really, really enjoy the simplicity of running services in Docker. It's just like "Yeah, I want this, I need that, run it, done". Therefore I plan on using a powerful Docker host to run my services, especially those which require databases. My network will be monitored with Zabbix, I want to host a website inside a Docker container, and maybe have an application database for a battleships game I plan to develop along with an Android app.
All of those require different DBMSs, mainly MySQL, PostgreSQL and MariaDB, which is why I want to run them containerized as well.
To achieve that I want to mount a FreeNAS share on the Docker host and then pass that mount point as a volume into the containers, roughly like the sketch below.
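Something like this is what I have in mind, using the Docker Python SDK; the IP address, export path and container names are just placeholders I made up:
```python
# Sketch only: assumes the FreeNAS box at 192.168.1.50 (placeholder) exports
# /mnt/tank/apps/postgres over NFS, and that the Docker SDK for Python
# (pip install docker) is available on the Docker host.
import docker

client = docker.from_env()

# Create a named volume backed by the NFS export on the FreeNAS box.
client.volumes.create(
    name="postgres-data",
    driver="local",
    driver_opts={
        "type": "nfs",
        "o": "addr=192.168.1.50,rw,nfsvers=4",
        "device": ":/mnt/tank/apps/postgres",
    },
)

# Run a PostgreSQL container with its data directory on that volume.
client.containers.run(
    "postgres:12",
    name="battleships-db",
    detach=True,
    environment={"POSTGRES_PASSWORD": "changeme"},
    volumes={"postgres-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
```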

Thank you for reading this far, really!!
Now that you know my requirements, here's what I have planned so far:
My initial plan was to use a 12-bay 3.5" HP ProLiant DL380p Gen8 server with an initial config of 6x 6TB WD Red (Pro) drives in RAIDZ2.
Then I read here (ZFS Pool Performance) that RAIDZ2 has abysmal IOPS performance, so I looked into using six 2-way mirrors of 3TB drives, which should yield pretty great average performance in sequential/random reads and writes at the cost of losing 50% of my raw capacity.
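Here's the back-of-the-envelope comparison I did (Python; the ~150 random IOPS per disk is just a ballpark I'm assuming, not a measured number):
```python
# Rough rule of thumb: random IOPS scale with the number of vdevs,
# since each vdev delivers roughly one disk's worth of random IOPS.
DISK_IOPS = 150  # ballpark assumption for a 7200rpm drive

def raidz2(drives, size_tb, vdevs=1):
    """Usable TB and rough random IOPS for RAIDZ2 vdevs (2 parity drives each)."""
    usable = vdevs * (drives - 2) * size_tb
    return usable, vdevs * DISK_IOPS

def mirrors(vdevs, size_tb):
    """Usable TB and rough random IOPS for 2-way mirror vdevs."""
    return vdevs * size_tb, vdevs * DISK_IOPS

print(raidz2(drives=6, size_tb=6))   # (24, 150) -> one vdev, one disk's IOPS
print(mirrors(vdevs=6, size_tb=3))   # (18, 900) -> six vdevs, ~6x the IOPS
```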

Later I thought "Hey, why fix myself on 3.5" drives?! I could use 2.5" drives and buy the 25-bay version of the DL380, right?!". This would give me way more flexibility with storage, meaning I could start off with 2x 5-drive RAIDZ2s and add the 3rd, 4th and 5th RAIDZ2 later as I need more space. This won't fix the IOPS problem with RAIDZ2, but it is expandable.
There's just this giant question mark left in my head, and that is running 25 drives in close proximity... I am a little scared that running 25 normal WD Red 2.5" drives will kill my system due to vibration. I know their 3.5" Red Pros are rated for up to 16 drives per chassis, but there are no "Pro" 2.5" ones.
So my first real question is, how much of a concern is this really for an average home user with moderate usage??? I really don't mind losing those movies or music because they will be backed up anyway or just re-ripped. The databases will be backed up as well to an external mini-NAS.
I read a post from one guy who said you could theoretically dampen the drives' vibrations with some rubber or sponge, but I know this is more a hack than a solution. What do you think?

What are your recommendations to tackle the IOPS problem? Use a SLOG + L2ARC?
I really have no estimate of how many IOPS a database needs when its application is only accessed by one user. Maybe the 500-1000 IOPS of the pool are enough, I have no idea :(

The system will have at least 64GB of RAM and can be quickly and cheaply expanded to 128GB. Is there even a need for an L2ARC then? I mean, the biggest files I handle are 40GB in size.

For now this is everything I wanted to ask and have clarified.

Here's the TL;DR:
  • 64-128GB RAM system with 24-32 threads, 10Gbit NIC, max. 25TB storage, Xeon E5-24xx series CPU
  • very mixed workload, mostly random/sequential reads and occasional writes
  • streaming, databases, content storage (photos, movies, music)
  • accessed mainly by me alone, family and friends just stream content and access Nextcloud
  • RAIDZ2 or Mirror
  • 12 3.5" 3TB drives or 25 2.5" 2TB drives
  • vibration concerns?! use rubber?


Thank you again for reading this far! Here's the end ;-)
I think this could be useful for future first time builders like myself trying to choose the correct hardware.

Have a great day and stay healthy!

Steve
 
Last edited:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
stick with 3.5" drives and ignore the "Pro version supports 16" drive nonsense. Yes there is probably a difference but you do not care at all. Also ignore the pro wd reds unless you really want that extra warranty. Just get regular reds or shuck them.

I'll point out that two things in your list that are harder to deal with is the iscsi and database storage. iscsi needs a pool with lots of freespace to work well and databases need lots of iops usually. I might suggest getting a system with 24 drives and either doing a 4 vdev raidz2 system or a 3 vdev raidz2 system. I would also suggest going with shucked easy store drives and get yourself some cheap WD reds with a larger capacity. You can usually find 12TB wd reds for about $179USD
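Rough numbers for those two layouts with 12TB shucked drives (Python sketch, ignoring TB vs TiB and ZFS overhead):
```python
# Usable capacity for 24 drives split into equal-width RAIDZ2 vdevs.
# Ignores TB vs TiB, metadata and the free-space headroom ZFS wants.
DRIVE_TB = 12  # shucked 12TB WD Reds

def raidz2_pool(total_drives, vdevs, drive_tb=DRIVE_TB):
    width = total_drives // vdevs            # drives per vdev
    usable = vdevs * (width - 2) * drive_tb  # 2 parity drives per vdev
    return width, usable

print(raidz2_pool(24, 4))  # (6, 192) -> 4x 6-wide RAIDZ2, ~192TB raw usable
print(raidz2_pool(24, 3))  # (8, 216) -> 3x 8-wide RAIDZ2, ~216TB raw usable
```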
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
The main advantage of the Pro over the regular Reds is performance: 7,200 rpm vs 5,400. You will need to figure out if that will matter for you or not. I've personally split things into two tiers and am thinking of adding a third. I have a 6-drive RAIDZ2 for bulk storage, Plex, family shares, etc., and an SSD mirror for VMs. The third tier I'm thinking about is NVMe. I have 10G networking and SATA speeds won't allow me to saturate that. It's more of a want than a need.

Thinking specifically about your DB, it really depends on how large it will be. If you have enough RAM for it to fit completely in ARC, the performance of the drives backing it isn't as critical on the read side, and a SLOG device can help some on the write side. You would just have some slowness after a reboot of the system. Right now I have a mirror of Intel DC S4600s backing about 20 VMs over iSCSI and the latency almost never goes over 1ms. Adding a one-user DB to that probably wouldn't even make them work hard.
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
stick with 3.5" drives and ignore the "Pro version supports 16" drive nonsense. Yes there is probably a difference but you do not care at all. Also ignore the pro wd reds unless you really want that extra warranty. Just get regular reds or shuck them.

I'll point out that two things in your list that are harder to deal with is the iscsi and database storage. iscsi needs a pool with lots of freespace to work well and databases need lots of iops usually. I might suggest getting a system with 24 drives and either doing a 4 vdev raidz2 system or a 3 vdev raidz2 system. I would also suggest going with shucked easy store drives and get yourself some cheap WD reds with a larger capacity. You can usually find 12TB wd reds for about $179USD

So you're saying that the anti-vibration sensors etc. on the Pros are just a little over the top and paranoid for an average home user like me?
Yesterday I read about a guy from 45Drives who measured drive vibration, and he found that the firmer the drives were mounted, the less wobble there was (he mounted them to a solid block of granite, resulting in zero vibration).

The problem I am facing is that the online store I'm buying from sells refurbished datacenter servers and only offers 12-bay 3.5" systems or 25-bay 2.5" systems. I honestly lean towards the 25-bay version because of its expandability and lower initial investment. It would also give me the option to add an SSD-only pool for the database. Why do you think 3.5" drives are superior to 2.5" ones? The smaller ones are said to produce less noise and heat.

Something I want to add to the iSCSI part is that I do not have very large VMs. Most of them are very small (<64GB), and since the FreeNAS box is my central data storage for every system, they won't grow larger, so free space shouldn't be much of a problem.

Thank you for replying :)


EDIT: WD only offers 1TB Red 2.5" drives, which is way too small. That's a total of 10TB when using 25 drives in 5x RAIDZ2 arrays...
I will stick with my initial plan of using a 6-drive RAIDZ2 in the beginning, expanding with another array if needed. I just somehow need to solve this IOPS problem. I think there's no way around a SLOG and L2ARC in my use case, right?
 
Last edited:

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
The main advantage of the Pro over the regular Reds is performance: 7,200 rpm vs 5,400. You will need to figure out if that will matter for you or not. I've personally split things into two tiers and am thinking of adding a third. I have a 6-drive RAIDZ2 for bulk storage, Plex, family shares, etc., and an SSD mirror for VMs. The third tier I'm thinking about is NVMe. I have 10G networking and SATA speeds won't allow me to saturate that. It's more of a want than a need.

Thinking specifically about your DB, it really depends on how large it will be. If you have enough RAM for it to fit completely in ARC, the performance of the drives backing it isn't as critical on the read side, and a SLOG device can help some on the write side. You would just have some slowness after a reboot of the system. Right now I have a mirror of Intel DC S4600s backing about 20 VMs over iSCSI and the latency almost never goes over 1ms. Adding a one-user DB to that probably wouldn't even make them work hard.

Thank you for answering :)
Personally, the performance bonus is not that important to me. For streaming movies over the Internet via Plex I won't be able to exceed 26Mbit/s because of my upload, and most movies don't have a higher bitrate anyway.
The array will also only see heavy sequential writes during the night, because I will let Sonarr and Radarr run while everyone's asleep.

So if I understand correctly, you use separate flash-based storage just for running VMs to avoid the hassle of configuring ZIL and L2ARC?
Could I use the drives you mentioned as a SLOG and L2ARC to boost overall performance? Then I'd just have one pool that is decently fast and has good IOPS. As long as my VMs' data gets cached correctly, that is, but that shouldn't be a problem because it gets accessed all day long.

Regarding databases, I dug a little deeper yesterday and found some recommendations from Zabbix themselves. They had an example where their system was monitoring 3,000 values every 60 seconds, which results in 50 database transactions per second. They calculated this would accumulate around 11GB of data per month. Since I won't be keeping more than one year's worth of history, the database shouldn't grow larger than 132GB. But(!) that is Zabbix alone. And I don't see a reason why I would want to keep all of that in my valuable system memory...
IOPS-wise I found this PDF (page 12) stating one should estimate around 30 IOPS per transaction. That'd be 1,500 IOPS for Zabbix alone and maybe twice that for everything else in my network that relies on databases.
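Here's the rough math in Python (the 30 IOPS/transaction figure is from that PDF; doubling it for my other databases is just my own guess):
```python
# Back-of-the-envelope sizing for the Zabbix database, based on the numbers
# from the Zabbix docs and the ~30 IOPS/transaction estimate from that PDF.
values_per_minute = 3000
tps = values_per_minute / 60          # ~50 database transactions per second

gb_per_month = 11
history_months = 12
db_size_gb = gb_per_month * history_months   # ~132 GB after one year

iops_per_transaction = 30
zabbix_iops = tps * iops_per_transaction     # ~1500 IOPS for Zabbix alone
total_iops = 2 * zabbix_iops                 # rough guess incl. my other DBs

print(tps, db_size_gb, zabbix_iops, total_iops)  # 50.0 132 1500.0 3000.0
```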
So I think I might be better off setting up SLOG and L2ARC or what do you say? :)

And thanks for your recommendation of the Intel drives :) They look very promising regarding IOPS and power-loss protection! I haven't found any that cheap! The only downside is they won't saturate a 10Gb link either. But do you think it is even noticeable to have 10Gbit?? I mean, my largest file copies will be Sonarr and Radarr pushing their downloads to the server... and that happens during the night. I don't mind having them take a little longer. What do you say? To 10Gb or not to 10Gb? :D

Thanks for taking your time to read and reply :)
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
I did the 6-drive RAIDZ2 with SLOG and L2ARC for a while. In daily operation it works great. However, after you update and restart FreeNAS it takes forever to get all the VMs back up and running. For me it was at least 15 minutes, and sometimes closer to 30, before they were usable. With SSDs it takes about 3 minutes, and I'm guessing a good NVMe drive could do it in about 1. So far I've been able to get used enterprise SSDs and NVMe drives for $100-110/TB. You have to be patient, but it can be done.

As far as 10Gb goes, the main time I see a big difference is when doing mass starts and stops of VMs. That will saturate whatever the bottleneck is. Also, it's nice to have for migrating VMs between hosts.
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
I did the 6-drive RAIDZ2 with SLOG and L2ARC for a while. In daily operation it works great. However, after you update and restart FreeNAS it takes forever to get all the VMs back up and running. For me it was at least 15 minutes, and sometimes closer to 30, before they were usable. With SSDs it takes about 3 minutes, and I'm guessing a good NVMe drive could do it in about 1. So far I've been able to get used enterprise SSDs and NVMe drives for $100-110/TB. You have to be patient, but it can be done.

As far as 10Gb goes, the main time I see a big difference is when doing mass starts and stops of VMs. That will saturate whatever the bottleneck is. Also, it's nice to have for migrating VMs between hosts.
Do you mean SSDs and NVMe drives as SLOG and L2ARC, or as separate pools???
I have absolutely zero experience with NVMe. If I'm not mistaken, it is just another protocol like AHCI that communicates over the PCIe bus and requires OS drivers to make the drives work, am I correct? It does not require any special motherboard or CPU, except if I want to boot off NVMe?!

I don't start a lot of VMs simultaneously and I don't migrate them a lot either. And I'm a patient guy. The only thing is that it would be great to prepare the build so that it would just require adding a 10Gb card to be ready to go. But I think I'll just stick with 1GbE because all of my clients only support 1GbE anyway.

I read the SLOG just has to be as large as the throughput of my link × 5 seconds, so 0.625GB for 1GbE. How large was your SLOG? I mean, a 240GB NVMe SSD is a little overkill for that. Are there any small, ultra-fast alternatives?
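For reference, this is the sizing rule I'm applying (Python; the 5-second window is just what I read, not something I've verified):
```python
# SLOG sizing rule of thumb: enough to hold ~5 seconds of incoming writes.
# The 5-second window is the figure from what I read, not something I measured.
def slog_size_gb(link_gbit, seconds=5):
    gb_per_second = link_gbit / 8       # Gbit/s -> GB/s
    return gb_per_second * seconds

print(slog_size_gb(1))    # 0.625 GB for 1GbE
print(slog_size_gb(10))   # 6.25 GB for 10GbE
```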

Currently my plan is to use the 6-drive RAIDZ2 with a 1TB NVMe L2ARC (currently looking at the Samsung PM983 for that). I am not sure what to get for the SLOG, but it will be an enterprise-class NVMe drive as well.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
There's just this giant question mark left in my head, and that is running 25 drives in close proximity... I am a little scared that running 25 normal WD Red 2.5" drives will kill my system due to vibration. I know their 3.5" Red Pros are rated for up to 16 drives per chassis, but there are no "Pro" 2.5" ones.
So my first real question is, how much of a concern is this really for an average home user with moderate usage???
Don't worry about it. There are plenty of folks using high numbers of WD Reds in a chassis with no reports of vibration-related issues.
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
Do you mean SSDs and NVMe drives as SLOG and L2ARC, or as separate pools???
I have absolutely zero experience with NVMe. If I'm not mistaken, it is just another protocol like AHCI that communicates over the PCIe bus and requires OS drivers to make the drives work, am I correct? It does not require any special motherboard or CPU, except if I want to boot off NVMe?!

I read the SLOG just has to be as large as the throughput of my link × 5 seconds, so 0.625GB for 1GbE. How large was your SLOG? I mean, a 240GB NVMe SSD is a little overkill for that. Are there any small, ultra-fast alternatives?
I had a 500GB Samsung 850 EVO as the L2ARC and a 280GB Optane 900p as the SLOG.

NVMe is a protocol designed for flash storage and is much more efficient for that than AHCI was. With newer motherboards you should be able to boot from NVMe regardless of the form factor: PCIe add-in card, U.2 or M.2.

Your calculations sound about right. Even at 10Gb speeds you don't really need a very large SLOG device. ServeTheHome keeps an updated list of the best devices out there right now for SLOG. It looks like the current top picks are the Optane 900p or 905p, Optane P4800X, Optane P4801X, or Intel DC P3700 400GB.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Even at 10Gb speeds you don't really need a very large SLOG device
Actually, it's more about the speed of your pool disks... if you can't flush the queue out to the pool after 10 seconds, you stall IOPS anyway to wait for it... I saw a calculation some years back which is probably still about right... 30GB is already overkill and allows for overhead and keeping 20% free.
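Something like this, roughly (Python; the pool throughput and the 10-second window are just illustrative numbers, plug in your own):
```python
# The SLOG only ever has to hold what the pool hasn't flushed yet, so its
# useful size is bounded by min(ingest rate, pool write rate) x flush window.
# The 10-second window and the example pool throughput are illustrative only.
def useful_slog_gb(link_gbit, pool_write_mb_s, window_s=10):
    ingest_mb_s = link_gbit * 1000 / 8            # link speed in MB/s
    bound_mb_s = min(ingest_mb_s, pool_write_mb_s)
    return bound_mb_s * window_s / 1000           # GB

print(useful_slog_gb(10, pool_write_mb_s=600))    # 6.0 GB -> 30GB is plenty
```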
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
Don't worry about it. There are plenty of folks using high numbers of WD Reds in a chassis with no reports of vibration-related issues.
Thank you :) I got a little paranoid from some users' posts and the official guides, but I think they really lean more towards enterprise use.
 
Joined
Jan 27, 2020
Messages
577
You occasionally read here about dead Reds shortly after their warranty expired, though. Just a heads-up to keep in mind: the warranty for standard Reds is 3 years.
Also look out for the EFAX 4-6TB Reds; these are SMR drives, which you don't want.
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
I had a 500GB Samsung 850 EVO as the L2ARC and a 280GB Optane 900p as the SLOG.

NVMe is a protocol designed for flash storage and is much more efficient for that than AHCI was. With newer motherboards you should be able to boot from NVMe regardless of the form factor: PCIe add-in card, U.2 or M.2.

Your calculations sound about right. Even at 10Gb speeds you don't really need a very large SLOG device. ServeTheHome keeps an updated list of the best devices out there right now for SLOG. It looks like the current top picks are the Optane 900p or 905p, Optane P4800X, Optane P4801X, or Intel DC P3700 400GB.
As L2ARC, any consumer SSD would do, right? It doesn't need to have PLP like a SLOG?

Optane, as far as I recall, has special CPU requirements, right?
My particular system will be a Xeon E5-2450L from 2012. Do you think that is too old for FreeNAS? It is a used ProLiant DL380 Gen8 server. It is a pretty great deal at ~$450-500 with redundant power supplies.

Prices on those disks you mentioned (even on eBay) are freaking insane. I would love to keep the price for the entire NAS below $2,200...
What do you think about an Intel DC S3700 200GB? I have read those were very popular, and they are very cheap to get: ~$50 for me. I know it's SATA, but it has decent IOPS.
 
Last edited:

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
You occasionally read here about dead Reds shortly after their warranty expired, though. Just a heads-up to keep in mind: the warranty for standard Reds is 3 years.
Also look out for the EFAX 4-6TB Reds; these are SMR drives, which you don't want.
That is very unfortunate to hear :( The drives I have been looking at are those 6TB EFAX ones, the WD60EFAX.
What's so bad about them? Which ones would you recommend?

I am currently running one 6TB drive in a My Cloud; it hasn't failed me yet (fingers crossed).
 
Joined
Jan 27, 2020
Messages
577
WD admitted just recently that they sell "NAS-labeled" drives that are not conventional magnetic recording (CMR) but shingled (SMR) drives. SMR drives are known to cause errors and problems with a multitude of NAS applications, ZFS/FreeNAS included. There is a good post about it here by jgreco. EDIT: it was cyberjock.

You can read up on WD's statements on the whole ordeal here.
 
Last edited:

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
WD admitted just recently that they sell "NAS-labeled" drives that are not conventional magnetic recording (CMR) but shingled (SMR) drives. SMR drives are known to cause errors and problems with a multitude of NAS applications, ZFS/FreeNAS included. There is a good post about it here by jgreco. EDIT: it was cyberjock.

You can read up on WD's statements on the whole ordeal here.
What do you think about the WD Ultrastar DC HC310 enterprise drive? I can get those for around the same price as the 6TB WD Red EFAX.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Steve,

With that many different roles at once, do not consider your Plex and similar features as long sequential reads/writes. When you do many of them at once, they must all be served at once, and so the result is the equivalent of random reads/writes. That means you are back in the world of mirrors.

So 25 drives: 12x mirrors and one spare. Also, as pointed out, drives bought at the same time have a tendency to fail at the same time. That spare will help mitigate that risk.

As for your backups, be sure to cover yourself against physical threats as well. A fire will probably destroy everything: your main NAS, your backup mini-NAS and the originals you meant to re-rip. For complete protection, you need an offsite backup solution.

About the vibration, I use Seagate IronWolf drives. None of my servers use as many as 25 drives, so I never looked for their recommendation there. The point is that IronWolf drives are meant for such a use case.

Have fun designing your setup,
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
Hey Steve,

With that many different roles at once, do not consider your Plex and similar features as long sequential reads/writes. When you do many of them at once, they must all be served at once, and so the result is the equivalent of random reads/writes. That means you are back in the world of mirrors.

So 25 drives: 12x mirrors and one spare. Also, as pointed out, drives bought at the same time have a tendency to fail at the same time. That spare will help mitigate that risk.

As for your backups, be sure to cover yourself against physical threats as well. A fire will probably destroy everything: your main NAS, your backup mini-NAS and the originals you meant to re-rip. For complete protection, you need an offsite backup solution.

About the vibration, I use Seagate IronWolf drives. None of my servers use as many as 25 drives, so I never looked for their recommendation there. The point is that IronWolf drives are meant for such a use case.

Have fun designing your setup,
So you're saying that with such a mixed workload and many concurrent sequential reads/writes, the "scheduling" of the pool leads to mostly random access?? Which means I have to prioritize random throughput and IOPS? Which is what a mirrored pool does?
Not buying all the drives at once isn't much of a problem because I plan on upgrading storage as I need more.

What are your thoughts on using an L2ARC? Would there even be any difference between RAIDZ2 and mirrors with such a device???

It is mainly me using the NAS. There won't ever be more than 2-4 sequential reads or writes at any given time.

What would your recommendations be with 25 drives, considering L2ARC and SLOG for RAIDZ2?
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
As L2ARC, any consumer SSD would do, right? It doesn't need to have PLP like a SLOG?

Optane, as far as I recall, has special CPU requirements, right?
My particular system will be a Xeon E5-2450L from 2012. Do you think that is too old for FreeNAS? It is a used ProLiant DL380 Gen8 server. It is a pretty great deal at ~$450-500 with redundant power supplies.

Prices on those disks you mentioned (even on eBay) are freaking insane. I would love to keep the price for the entire NAS below $2,200...
What do you think about an Intel DC S3700 200GB? I have read those were very popular, and they are very cheap to get: ~$50 for me. I know it's SATA, but it has decent IOPS.
In my opinion, a consumer SSD works fine for L2ARC. Make sure you get one that has decent performance and endurance. It will get a lot more use than it would as a desktop OS drive.

I don't think there would be an issue using Optane with that CPU, other than it probably wouldn't work as a boot drive. I personally like the price point of the Xeon v2 servers. They have great performance for the price, and power usage isn't bad.

The S3700 will have twice the latency, half the IOPS, and a third the sequential write speed of the P3700. For a SLOG drive latency is king. Performance isn't cheap.
 