Planned FreeNAS Build (Looking for Feedback before I finish ordering)


SCS

Dabbler
Joined
Sep 10, 2016
Messages
42
I've run FreeNAS for a number of years now on a very low-power AMD E-350 CPU with 16GB of non-ECC DDR3 RAM, only four onboard SATA ports, and limited expansion via 1x PCIe slots after adding a dual-port Intel NIC. The great part is that it consumes only about 20W with the 4 x 2TB drives.

I've kept good backups on an alternate system, used for backup purposes only, in case the house of cards came tumbling down around me; luckily I never had an issue. I've had a productive year and I'm brushing up against my 80% limit on my storage, so I figured it's time to do things right and retire the Abomination, or at least repurpose it.

Well, I came across a few good deals on used H310 HBAs that I'll flash to IT mode, and around last Easter I was able to pick up 10 of the HGST Deskstar NAS 3.5" 4TB 7200 RPM drives extremely cheap, so I'm strongly leaning towards RAIDZ3. Last year was good, I've got some spare change lying around, and I'm looking to finally take the plunge on finishing the system. I knew I wanted to run two of the HBAs and a 10Gbps SFP+ NIC for faster transfers between my main rig and the ESXi box; everything else will be on 1Gbps copper. With 40TB of raw disk space I wanted at least 32GB of RAM, but since I'm looking to play with deduplication and to add more drives down the road (my current case will support 22 x 3.5" drives and 6 x 2.5" drives), I wanted a much higher RAM ceiling than many similarly priced boards would support.
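As a rough back-of-the-envelope on that layout (it ignores ZFS metadata, padding, and the TB-vs-TiB gap, so treat the numbers as ballpark only):

```python
# Ballpark capacity for a single 10 x 4 TB RAIDZ3 vdev.
# RAIDZ3 spends three drives' worth of space on parity; the 80% fill
# guideline is the one mentioned earlier in the thread.

drives = 10
drive_tb = 4
parity = 3                                        # RAIDZ3

raw_tb = drives * drive_tb                        # 40 TB raw
usable_tb = (drives - parity) * drive_tb          # ~28 TB before overhead
comfortable_tb = usable_tb * 0.80                 # ~22.4 TB at 80% full

print(f"raw {raw_tb} TB, usable ~{usable_tb} TB, ~{comfortable_tb:.1f} TB at the 80% mark")
```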

System Build via Google Sheet

Motherboard: Supermicro X10SRL-F-O (ATX, DDR4, LGA 2011)
HBA: (2) Dell H310/LSI 9211-8i
Processor: Xeon Processor E5-1620 v4
CPU Cooler: Noctua NH-U9DXi4 90mm SSO2 CPU Cooler (I was looking at other coolers but I like the push pull configurations, as well as the silence of Noctua fans.)
Main Storage: 10 x HGST Deskstar NAS 3.5" 4TB 7200 RPM, RAIDZ3
Power Supply: Antec EarthWatts EA-650 Platinum, 650W ATX12V/EPS12V, Energy Star certified
Memory: Samsung M393A4K40BB1-CRC 32GB DDR4-2400 LP ECC Reg server memory (2 to start, eventually populating all 8 slots for up to 256GB)
or
Memory: Samsung M393A2K40BB1-CRC 16GB DDR4-2400 LP ECC Reg server memory (2 to start, eventually populating all 8 slots for up to 128GB)

So I'm looking to get some feedback. The only things I've grabbed so far are parts I saw others on here using with FreeNAS and listed as supported. Both RAM options listed above are on Supermicro's compatibility list for this board. I'm just looking for your thoughts on biting the bullet on 32GB DIMMs, starting off with two and adding maybe another pair in 6 months or so before playing with iSCSI or deduplication. Alternatively, I may start off with two 16GB DIMMs and purchase the 32GB DIMMs thereafter.
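One thing worth penciling out before settling on DIMM sizes: the usual FreeNAS guideline for deduplication is on the order of 5GB of RAM per TB of deduplicated data. A quick sketch of that math (the 5GB/TB figure is a rule of thumb, and the 8TB dataset size below is a made-up example, not anything from this build):

```python
# Rule-of-thumb RAM budget for ZFS deduplication (~5 GB per TB of
# deduplicated data). The dataset size is a hypothetical example.

GB_PER_TB = 5              # common guideline, not a guarantee
dedup_dataset_tb = 8       # hypothetical: dedup only the VM dataset

ddt_ram_gb = dedup_dataset_tb * GB_PER_TB
print(f"~{ddt_ram_gb} GB of RAM for the dedup table alone, "
      "on top of normal ARC needs")
```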

So, what are you using it for besides a massive porn collection? Well, there is no porn collection, sadly. My main use is primarily storing and delivering my media collection: movies, TV shows, music, pictures, books, etc. Before I introduced her to streaming services, my girlfriend was one who would buy the box sets of TV series or movies. After many hours of ripping, everything is now deliverable via a Raspberry Pi and Kodi, and the physical media is in the basement and out of the living room/bedroom.

I run an IT consulting business where I'm routinely working on customer machines, so I have CloneZilla images, operating system ISOs, and all of my applications and such. I'm considering having CloneZilla dump directly to the FreeNAS box instead of to flash drives or USB 3.0 external drives.

I also do 1080p 60FPS video recording at a high bitrate to a local 4 x 1TB RAID 0, which I then copy over to my current FreeNAS machine for actual storage, since my standard 1Gbps link isn't fast enough to deliver the necessary throughput. This is part of why I'm looking to set up a machine-to-machine 10Gbps SFP+ data network between my main rig, FreeNAS, and my ESXi server.
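For a sense of the gap, here's the line-rate math (theoretical maximums; SMB/NFS overhead trims them in practice, and the capture bitrate below is a placeholder rather than the real figure):

```python
# Compare 1 GbE and 10 GbE line rates against a high-bitrate capture stream.
# Link speeds are theoretical maximums; the capture bitrate is a placeholder.

def link_mb_per_s(gbps: float) -> float:
    return gbps * 1000 / 8          # Gb/s -> MB/s (decimal units)

capture_mbps = 800                   # placeholder capture bitrate in Mb/s
capture_mb_s = capture_mbps / 8      # 100 MB/s needed for the stream

for gbps in (1, 10):
    headroom = link_mb_per_s(gbps) - capture_mb_s
    print(f"{gbps:>2} GbE: ~{link_mb_per_s(gbps):.0f} MB/s line rate, "
          f"~{headroom:.0f} MB/s headroom over the capture stream")
```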

I also plan to play around with running my ESXi server diskless, short of the CF card for the OS, hosting my VMs off the FreeNAS box, and getting the opportunity to play with deduplication, although I expect a lot of my VM disk usage to shrink since I've migrated many of my VMs from Windows and Linux to FreeBSD. I'll probably look into rebuilding them under one FreeBSD system via jails to play around with performance, storage, and RAM usage. This again is another reason why I'm looking to deploy the 10Gbps data network parallel to my copper 1Gbps network.
My current FreeNAS build has 2 x 2TB drives that I'm thinking of setting up as a RAID 10-style pool for the VMs to reside on, although I think I'll eventually move to 6 SSDs in RAIDZ2 or in striped mirror vdevs, as I suspect that will offer better performance.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The Dell PERC H310 cards may or may not work without a custom modification (taping over one of the edge connector pins). This information is on the internet, so look into it. The H310 does get hot, so plan to have some positive airflow across the heatsinks on the cards, and don't place them right up against each other; they need air.

It sounds like you have some good plans for a serious system.
 

SCS

Dabbler
Joined
Sep 10, 2016
Messages
42
The Dell PERC H310 cards may or may not work without a custom modification (taping over one of the edge connector pins). This information is on the internet, so look into it. The H310 does get hot, so plan to have some positive airflow across the heatsinks on the cards, and don't place them right up against each other; they need air.

It sounds like you have some good plans for a serious system.

I ended up taping around those pins and breaking into the girlfriend's clear fingernail polish to coat them.

I'm currently running it in my Windows box with 8 of the drives to play with.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Your Google Sheet shows two USB sticks for boot devices. Personally, I suggest going to small SSDs... they are a lot more reliable.

For VM datastores, striped mirrors are the way to go. You might consider two pools... one RAIDZ2 for your media storage, another for your VM stuff. This is exactly what I do. You'll also want an SLOG for your VM store (unless you want to accept the risk of disabling sync writes). This needs to be a data-center grade drive. I run an underprovisioned 200GB Intel S3700. You want something with high write tolerance (the S3700 is 10 DWPD for 5 years) and power loss protection. NVMe drives are even better, if you have that level of coin. You don't need to mirror this any longer.
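To put that endurance rating in perspective, here's the arithmetic behind "10 DWPD for 5 years" on a 200GB drive (figures taken from the paragraph above; actual SLOG wear depends entirely on the sync-write volume):

```python
# Rated write endurance implied by "10 drive writes per day for 5 years"
# on a 200 GB Intel S3700 (figures from the paragraph above).

capacity_gb = 200
dwpd = 10              # drive writes per day
years = 5

total_writes_tb = capacity_gb * dwpd * 365 * years / 1000
print(f"~{total_writes_tb:,.0f} TB rated writes over the warranty period")
# ~3,650 TB -- multiple petabytes of endurance, far beyond consumer SSDs
```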

On your L2ARC drive, you need a drive with good endurance, but it doesn't have to be quite as lightning quick. Underprovisioning (as listed on your Google Sheet) isn't required here. You also want to get to a fairly high bar on RAM before you worry about L2ARC. At 32GB, an L2ARC may actually hurt rather than help, since RAM gets consumed for L2ARC metadata. Personally, I've got another S3700 as L2ARC on my VM store... and the hit rate is embarrassingly low (0.3%). Not worth it.

What chassis are you intending to run?
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
I ended up taping around those pins and breaking into the girlfriends clear fingernail polish to cover those pins.
Was it tested without that? It is not required on every system. I didn't need to do it on mine, for example.
 

SCS

Dabbler
Joined
Sep 10, 2016
Messages
42
Your Google Sheet shows two USB sticks for boot devices. Personally, I suggest going to small SSDs... they are a lot more reliable.

I was looking at the Supermicro SATA DOM, probably in the 32GB range. Boot-up speed isn't much of a concern, and I've had no issues with the 16GB USB drives I've had so far. I figured I'd mirror them, and I intend to get two different models so they don't wear out at the same time.

For VM datastores, striped mirrors are the way to go. You might consider two pools... one RAIDZ2 for your media storage, another for your VM stuff. This is exactly what I do. You'll also want an SLOG for your VM store (unless you want to accept the risk of disabling sync writes). This needs to be a data-center grade drive. I run an underprovisioned 200GB Intel S3700. You want something with high write tolerance (the S3700 is 10 DWPD for 5 years) and power loss protection. NVMe drives are even better, if you have that level of coin. You don't need to mirror this any longer.

For power loss, the entire "rack" is on a set of UPSes that offer 30+ minutes of run time. I'm currently trying to find a way for both my ESXi and FreeNAS boxes to communicate with the UPS so they know when to start their power-down sequences. I know how to do it for one or the other, but not both at this time.
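One way to coordinate both from a single point is to let one host (or a Pi) monitor the UPS through NUT and then trigger each box's own shutdown over SSH. A minimal sketch of the idea, assuming a NUT server is already running; the UPS name, hostnames, and the ESXi power-off command are placeholders, not anything from this thread:

```python
# Minimal sketch: poll a NUT-managed UPS and, once it reports both
# "on battery" (OB) and "low battery" (LB), trigger shutdowns on both boxes.
# The UPS name, hostnames, and the ESXi shutdown command are placeholders.

import subprocess
import time

UPS = "myups@nut-host"                                   # placeholder NUT UPS id
SHUTDOWN_CMDS = [
    ["ssh", "root@freenas", "shutdown -p now"],          # FreeBSD/FreeNAS power-off
    ["ssh", "root@esxi", "poweroff"],                    # placeholder ESXi command
]

def ups_status() -> str:
    """Return the UPS status string from NUT's upsc client, e.g. 'OL' or 'OB LB'."""
    result = subprocess.run(["upsc", UPS, "ups.status"],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    while True:
        status = ups_status()
        if "OB" in status and "LB" in status:
            for cmd in SHUTDOWN_CMDS:
                subprocess.run(cmd)                      # best-effort shutdown
            break
        time.sleep(30)                                   # poll every 30 seconds
```

NUT's own upsmon can also coordinate staged shutdowns across multiple clients natively, so the script above is just the bare-minimum version of the idea.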

I'll have to brush up on the differences between the ZIL and a SLOG; I thought the ZIL offered the functionality and safety that you're pointing to as being answered by a SLOG.

The Intel S3700 seems reasonable. I'll look into this once I have a better understanding of the ZIL and SLOG differences.

I'm thinking of going 10 x 4TB RAIDZ3 for resiliency and maximum capacity. I'll hold off on hosting the VMs on the system until after I get the primary vdev squared away; that way, if something starts to act funny or performance starts to tank, I'll know what influenced it. I intend to use some of my sets of 4 drives so I can run the RAID 10-style striped mirrors. All my current VMs exist on mirrored 1TB drives, so space isn't all that critical for the VMs.

On your L2ARC drive, you need a drive with good endurance, but it doesn't have to be quite as lightning quick. Underprovisioning (as listed on your Google Sheet) isn't required here. You also want to get to a fairly high bar on RAM before you worry about L2ARC. At 32GB, an L2ARC may actually hurt rather than help, since RAM gets consumed for L2ARC metadata. Personally, I've got another S3700 as L2ARC on my VM store... and the hit rate is embarrassingly low (0.3%). Not worth it.

The L2ARC won't be deployed until later, when I get closer to 64GB or 128GB. My current system only has 16GB, and I noticed a huge improvement when I upgraded from 8GB; my hit ratio now is about 93.7% or so. I have no intention of deploying an L2ARC until I'm beyond 64GB, and I'll probably wait until I hit 128GB just so I could potentially use a larger L2ARC. I really only want to play with it for the VMs later, as I don't think it will prove overly useful for my other standard workloads versus just adding more RAM, which is normally the consensus anyway.

What chassis are you intending to run?

I have a brushed aluminum Lian Li V2100 that I've used for over a decade as my file server. Once I get everything installed, I'm going to CAD out a fan shroud and baffle to direct a set of 140mm fans over the expansion cards and get it 3D printed, giving the cards the directed airflow they were designed for.

I intend to add two 3 x 5.25"-bay to 5 x 3.5" hard disk cages. I'll install a 6 x 2.5" cage in the final 5.25" bay.

Was it tested without that? It is not required on every system. I didn't need to do it on mine, for example.

I didn't test it without the modification first. I'm currently running it in my main rig until I get the file server operational. I couldn't find any documented references to issues with clear-coating over the pins, so I did it. I had to do the same thing on my Dell PERC 5/i back when I ran hardware RAID.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
ZIL and SLOG are the same thing... one being in-pool, the other being on a dedicated device. iXSystems explains it well here:
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

For your VM hosting, keep in mind that you only get the IOPS of the slowest drive in each vdev. So, if you have one vdev of 10 HGST NAS drives, you're looking at ~75 IOPS for the entire pool. If you run striped mirrors, you can easily get 4, 5, or more vdevs. Personally, I run 6x2-way mirrors for my VM world, netting me 450 IOPS max.
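A quick sketch of that arithmetic, using the ~75 IOPS per 7,200 RPM drive figure above (random I/O only; sequential throughput scales differently):

```python
# Random-I/O IOPS for a ZFS pool scale with the number of vdevs:
# each vdev contributes roughly one drive's worth of IOPS.
# ~75 IOPS per 7,200 RPM drive, per the figure above.

IOPS_PER_DRIVE = 75

def pool_iops(vdev_count: int) -> int:
    """Rough pool IOPS: one drive's worth of IOPS per vdev."""
    return vdev_count * IOPS_PER_DRIVE

print("1 x 10-wide RAIDZ3 vdev :", pool_iops(1), "IOPS")   # ~75
print("5 x 2-way mirror vdevs  :", pool_iops(5), "IOPS")   # ~375
print("6 x 2-way mirror vdevs  :", pool_iops(6), "IOPS")   # ~450
```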

The SATADOMs are nice, but fairly spendy.
 

SCS

Dabbler
Joined
Sep 10, 2016
Messages
42
ZIL and SLOG are the same thing... one being in-pool, the other being on a dedicated device. iXSystems explains it well here:
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

For your VM hosting, keep in mind that you only get the IOPS of the slowest drive in each vdev. So, if you have one vdev of 10 HGST NAS drives, you're looking at ~75 IOPS for the entire pool. If you run striped mirrors, you can easily get 4, 5, or more vdevs. Personally, I run 6x2-way mirrors for my VM world, netting me 450 IOPS max.

The SATADOMs are nice, but fairly spendy.

OK, then I overlooked that the name changes between using the built-in ZIL and offloading it to a dedicated device. No wonder the reading on each seemed so similar.

I was intending to add the dedicated SLOG, although I was referring to it incorrectly as a dedicated ZIL when I added the VM storage component. The 10 HGST drives aren't intended for VMs but for my general storage for the next several years, so I won't have to worry about upgrades. I know such a large vdev is sort of frowned upon because upgrades are expensive, given you need to acquire that many drives again.
 

SCS

Dabbler
Joined
Sep 10, 2016
Messages
42
I'm probably looking to pull the trigger on this in the next month, so I thought I'd bump this back up to see if anyone else spots any oversights I've missed or has additional recommendations.

thank you,
 

SCS

Dabbler
Joined
Sep 10, 2016
Messages
42
Everything is ordered. I look forward to doing a bit of a build log to document the performance.

I ended up going with two 32GB sticks, as I found a vendor about $80 cheaper per stick. $185 for a 32GB stick off of Supermicro's compatibility page seemed too good a deal not to start off with 64GB of RAM.

I just found out my state is starting to collect sales tax on Amazon orders, so I figured I ought to grab this now instead of paying 6% more.

Thanks for your comments and feedback.
 