FreeNAS Hardware Selection Advice

Status
Not open for further replies.

Library IT

Cadet
Joined
Aug 1, 2017
Messages
2
I'm investigating hardware for my first FreeNAS build. It will serve as the primary backup destination for a small/medium organization. My priorities are high availability and longevity/future-proofing. Our total volume of data isn't massive. The use case is storing a versioned set of full system images for all in-house servers, plus other critical data backups. I'll probably be tempted to run 2-3 bhyve VMs as well. I'd love to hear some feedback on this idea list - hardware isn't my expertise!

I'm especially interested to know if you see any glaring issues with these selections, or if you would suggest alternatives where this combo is poorly chosen. Thanks much!

 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Looks good, but you don't specify the HBA.

Also, a Xeon E3 v5 system might be a cheaper option, with no real drawbacks for your use case.
 
Joined
Apr 9, 2015
Messages
1,258
You may want to get a couple of SATA DOMs to boot from, since two ports on the board support them with power. You can use the onboard SATA ports up to a point, but a SAS card would give you many more expansion options. If you go that route, though, you will probably also want a case with more drive bays and an expander backplane.

If you are going to use it for backups, then RAIDZ2 or RAIDZ3 would be better than mirrors, especially with drives that large. Mirrors would be good for VM storage, but honestly a bunch of smaller drives would be better suited to that. Also, a lot of people have shied away from Seagate drives. WD Reds are the gold standard right now for most people, and if you can get HGST NAS drives, they can be an upgrade in speed with the same reliability, sometimes at the same price.

https://www.backblaze.com/blog/hard-drive-failure-rates-q1-2017/ has good info on drive life, and https://calomel.org/zfs_raid_speed_capacity.html has some great info about drive capacity vs. speed for pools.

You can run a VM on a RAIDZ pool, but VM storage gets a lot faster when you have multiple vdevs, which is where mirrors come in. If the data backup is the most important thing, I would look at 7 to 9 drives in a RAIDZ3, and if the data is really important, a hot spare could be added in. You would be able to survive three drive failures, and using 6TB drives you would get by a lot cheaper than with the 8TB drives. 7 x 6TB drives would net you around 21 TiB of storage after overhead (http://wintelguy.com/zfs-calc.pl), basically the same as you would get with the 8TB drives, but you could survive any three drives failing. Later on, when you start playing with VMs, you could either drop in an SSD as your VM storage that gets backed up to the RAIDZ3 pool, or a bunch of 1 or 2TB mirrors.

But everyone has a different way of looking at things; I have a RAIDZ3 setup myself. Otherwise, things look OK.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
but you don't specify the HBA.
I believe the motherboard has 10 SATA ports. I looked this up because I'm curious about the boot device and whether the OP plans to use an SSD there or not. At least there would be a SATA port available for an SSD if desired. BTW, to the OP: I highly recommend an SSD. This is for a business and you want it to be reliable, so an SSD is the best option, but you will need to mount it internally, since the spinning-rust drives will use up all the drive bays.

EDIT: Also to the OP, have you considered ESXi 6.5? Since you plan to run some VMs, ESXi may be a better option, but it will depend on which VMs you plan to run.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm investigating hardware for my first FreeNAS build. It will serve as the primary backup destination for a small/medium organization. My priorities are high availability and longevity/future-proofing.

Two things. First, if you want flexibility for the future, I would highly recommend a 4U, 24-bay chassis instead, as it will give you room to grow as time goes on. You could easily add another 6 drives later, and another 6 after that. You have options, because the processing horsepower needed does not scale up as drastically with more drives as you might think. I have a system at work with 60 drives and the CPU only occasionally hits 50%.
Second, with the 24-bay case you could set up some 1TB drives in mirror sets for your VMs, keeping the VMs separate from the archive data.
The 4U cases also have better airflow options, so they are usually not as loud.
 
Joined
Apr 9, 2015
Messages
1,258
Agreed on the 4U cases. Plan for the future when you do your build. It may cost a little more in the beginning, but it lets you spread the purchase cost over a much longer period through expansions. My case only supports 15 drives, but I have 7 drives in it right now and can easily add seven more in a couple of years when they're needed. It's for my home use, but it's still set up to last at least ten years, if not longer.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Agreed on the 4U cases. Plan for the future when you do your build. It may cost a little more in the beginning, but it lets you spread the purchase cost over a much longer period through expansions. My case only supports 15 drives, but I have 7 drives in it right now and can easily add seven more in a couple of years when they're needed. It's for my home use, but it's still set up to last at least ten years, if not longer.
My home units are the older Supermicro 3U systems with 15 front hot-swap bays. I love them (after I changed the fans) because I can so easily swap out failed drives. I had a bunch of drives in the Emily-NAS storage pool that were in the 4.5-year age bracket, and I have had to change 7 drives in the past year. I am still rocking two drives that are right around 49,000 power-on hours. Hot swap wasn't a consideration when I built my first five FreeNAS systems, just cost, but eventually you learn the lesson. The second-hand Supermicro chassis from eBay was only $179, even with shipping, and it included redundant power supplies. The redundant power supplies have saved me downtime, too: I had one go bad and was able to run on the surviving supply until a replacement could be shipped to me.
I use this kind of equipment at work but never thought about using it at home until I had to support my own 24/7 server for a few years.
 

Library IT

Cadet
Joined
Aug 1, 2017
Messages
2
Thanks so much for this great feedback. You've given me lots to consider, and all of these suggestions make good sense - less processor, more drive bays, proven drives, RAIDZ2 or Z3, redundant reliable boot media.

@joeschmuck's suggestion of ESXi sparked a lightbulb. I had never considered combining FreeNAS with an ESXi host. If you would, weigh in on this idea:

I have an ESXi server already, with massive amounts of processor to spare, and room for adding drives and RAM. It's currently running ESXi 5.1, and several VMs on 2 drives in a RAID1 (so it's due for maint/reconfig).

What if I upgraded this server to add a second controller in passthrough mode, so that I could run a FreeNAS VM on the RAID1 array (handled by the RAID controller), with the drives in all the spare bays, controlled by the passthrough controller, serving as the data destination for FreeNAS? Obviously, this would confine FreeNAS's growth to the limits of this hardware. I'm happy to accept those limitations, as I know I'll have budget to replace the ESXi host within a few years, so this would be a short-to-mid-term solution (as opposed to buying dedicated FreeNAS hardware for long-term use).

Existing ESXi Host specs:
Dell PowerEdge R520 (12th gen) 2U, with 8 x 3.5/2.5 hot-swap bays (quick specs video)
PERC H310 integrated RAID controller (LSI SAS 2008 / Dell 342-3528) (currently controlling 2 SAS drives in RAID1)
Dual Intel Xeon E5-2430 2.20GHz processors
32GB RDIMM 1333 MT/s low volt single rank ECC registered RAM (Dell-317-9649) (plan to increase to 64GB)
Internal dual SD module with pair of 1GB SD cards for redundancy (these run ESXi)
VMware ESXi v5.1 U1 embedded image on flash media (plan to update to ESXi 6.5)
Onboard Broadcom 5720 Dual Port 1Gb Ethernet (Dell-430-4715)
Intel Ethernet I350 QP 1Gb server adapter (Dell-430-4444)
PCIe slots available (one currently filled with Intel adapter)

Questions I'm diligently googling to find answers to:
  • Am I right in thinking there should be no max hard drive size limit imposed by the R520 for my FreeNAS data drives, as long as they are controlled in passthrough mode?
  • Apparently, the H310 (LSI SAS 2008) supports both RAID and passthrough modes. The other PERC officially supported by the R520 is the H710/H710P (LSI SAS 9266-8i). According to FreeNAS users, the H710 does NOT support passthrough mode. With that in mind, is it possible to add an H710P adapter (in a PCIe slot), configure it as the primary RAID adapter for the RAID1 VMs, and reconfigure the H310 in passthrough mode for the ZFS drives? Or would it be better to leave the H310 as the RAID controller and add an LSI 9207-8i as the passthrough controller in a PCIe slot? I've seen mixed discussions on the FreeNAS forums about the reliability and performance of the PERC H310 when flashed to passthrough mode.
  • How will I connect both controllers to the backplane? I am having trouble thinking of how to google this question. I've yet to see a tutorial or example of configuring two controllers for one set of hotswap bays. This is my inexperience talking.
  • Am I going to negatively impact the performance of my VMs on this host, or of FreeNAS, by combining them on this hardware? The processor won't be a bottleneck. Adding memory is no problem. I do have six 1Gb NICs on two cards, so it's possible I could dedicate the Broadcom card just to FreeNAS, retaining the four Intel NICs for VM traffic. VMs are primarily several moderately busy private web servers and DNS servers, as well as a failover firewall. Is there a weak point lurking here?
Having both a RAID controller and a non-RAID controller in one server is the aspect I really don't know how to evaluate in advance for potential configuration or performance problems.

Thanks again for the tremendous feedback.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What if I upgraded this server to add a second controller in passthrough mode, so that I could run a FreeNAS VM on the RAID1 array (handled by the RAID controller), with the drives in all the spare bays, controlled by the passthrough controller, serving as the data destination for FreeNAS? ...
There are some users on the forum who are running FreeNAS in a VM, and some have even posted writeups on their configurations, but it is not a supported configuration, and many people will advise against it because of the various problems involved and how destructive those problems can be to your data.
The Dell H310 controller needs to be flashed with the IT-mode firmware to be fully supported in FreeNAS.
I really can't address some of your concerns because I have not tried to run FreeNAS virtualized. Perhaps someone else will comment.
I have run a SCSI controller and a RAID controller in the same system, and they don't have a problem working together even when they are being addressed by the same operating system. If you pass one through ESXi into the virtualized OS, it may present problems I am not aware of.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The basic trick to running FreeNAS in a VM is to pass a PCIe HBA through to the FreeNAS VM. You can then boot the FreeNAS VM off a vmdk on an ESXi datastore.

And then it works just fine.

And you can even pass the storage back to ESXi as iSCSI or NFS through a vSwitch, and it runs at more than 10 Gb/s.

A good example of the idea:
https://b3n.org/freenas-9-3-on-vmware-esxi-6-0-guide/

Works with 6.5 just as well.

I have no specific information on your hardware, but I think I've seen people having issues using some slots in Dell servers.
 

Linkman

Patron
Joined
Feb 19, 2015
Messages
219
. . .
I have no specific information on your hardware, but I think I've seen people having issues using some slots in Dell servers.

I believe this is due to a flashed controller not being usable in the dedicated storage slot, so you need to take into account the loss of a slot if you go that route.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@joeschmuck's suggestion of ESXi sparked a lightbulb. I had never considered combining FreeNAS with an ESXi host. If you would, weigh in on this idea:

I have an ESXi server already, with massive amounts of processor to spare, and room for adding drives and RAM. It's currently running ESXi 5.1, and several VMs on 2 drives in a RAID1 (so it's due for maint/reconfig).

What if I upgraded this server to add a second controller in passthrough mode, so that I could run a FreeNAS VM on the RAID1 array (handled by the RAID controller), with the drives in all the spare bays, controlled by the passthrough controller, serving as the data destination for FreeNAS? Obviously, this would confine FreeNAS's growth to the limits of this hardware. I'm happy to accept those limitations, as I know I'll have budget to replace the ESXi host within a few years, so this would be a short-to-mid-term solution (as opposed to buying dedicated FreeNAS hardware for long-term use).
Since you are already using ESXi, you may already be aware of the caution and discipline required to prevent this from biting you in the rear. Here are the issues you face... 1) When you reboot or shut down your ESXi server, FreeNAS goes away. 2) When you reboot or shut down your ESXi server, FreeNAS goes away. 3) When you reboot or shut down your ESXi server, FreeNAS goes away. Okay, I think that is covered.

Well, not really; there are some other obvious things, such as shutting down the FreeNAS VM before shutting down the ESXi server. While locking the RAM is not required, locking the RAM in ESXi has been promoted a lot so that no bad things happen. I have been running a second ESXi and FreeNAS without the RAM locked for about a year now with no ill effects, but it's a test on my part; I would recommend locking the RAM to anyone doing this unless you want to take the same risk I am. I know FreeNAS works fine on ESXi 6.0 and 6.5 because I'm using both; I don't see why it wouldn't run on 5.1. My VM hardware version is 11 for the FreeNAS VM on both machines. I also have a long thread in the Off-Topic section called My Dream System (I think) which has a lot of detail on my journey through building up a proper ESXi system. If you have lots of ESXi experience, much of it will be obvious, but I was learning and being safe.

The link @Stux provided above is one of the references I used as well, and it gets a person new to ESXi through most of the setup. My thread also discusses how to tie in a UPS to shut things down properly.
and reconfigure the H310 in passthrough mode for the ZFS drives? Or would it be better to leave the H310 as the RAID controller and add an LSI 9207-8i as the passthrough controller in a PCIe slot?
I don't care for the PERC H310 much (keeping in mind that I'm a bit biased here), so I would recommend you leave your ESXi system as it is, operational, and add an HBA that works with FreeNAS; if it's a RAID controller, it needs to be flashed to IT mode, of course. Put it in pass-through and you should be fine. One thing to note: once you place your add-on card in the system and configure the VM for pass-through, don't move the card to a different slot, or the pass-through device will show up at a different address and you will need to reconfigure the ESXi VM. Lesson learned.

Am I going to negatively impact the performance of my VMs on this host, or of FreeNAS, by combining them on this hardware? The processor won't be a bottleneck. Adding memory is no problem. I do have six 1Gb NICs on two cards, so it's possible I could dedicate the Broadcom card just to FreeNAS, retaining the four Intel NICs for VM traffic. VMs are primarily several moderately busy private web servers and DNS servers, as well as a failover firewall. Is there a weak point lurking here?
First, if done properly, you will not degrade the performance of your current VMs. It is very possible to actually improve your VM performance, but I don't have any details on how your system is configured.

Second, I would not use a Broadcom NIC for FreeNAS, but rather one of the Intel NICs. If you create your network structure properly, you may be able to use only one or two NICs for your entire machine, but then again, I don't know your LAN/WAN setup. I have two Intel NICs in my machine and only one in use; everything else uses internal vSwitches in ESXi. Why do I do this? Well, I only have one LAN, and having two independent NICs on the same LAN when using ESXi just doesn't make sense to me. That's the power of ESXi. But if you have a WAN connection, that would take a second NIC port, of course.

Third, when using internal vSwitches and virtual NICs, you can increase the data flow rates between VMs like crazy! Use VMXNET3 virtual NICs where your VM software supports them; you won't be sorry.

Obviously, your performance gains will depend on your ESXi configuration and your use case.

As for how you will connect both controllers to the backplane you have: good question, I have no idea.

Good luck on your build, I hope it goes well for you.
 