Advice on new FreeNAS build for home

Status
Not open for further replies.

kmtrax

Cadet
Joined
Oct 13, 2014
Messages
1
Greetings, I've been researching for a few weeks now and combing through all kinds of forums on home NAS solutions. I work with storage in IT for my job, so I'm trying to balance what I want to do against what is realistic in a home environment/budget. /evilgrin The home NAS/network has become a staple of the house, and the wife and kids simply expect things to work at all times with little to no outages. I almost gave up and just went with a Synology unit; that way, if it acted up, I could blame them instead of having the wife after me over another frankenbuilt system, and it follows the KISS design. However, the IT guy in me wants to build something. With all that in mind, I'm looking to upgrade to a shared storage solution so I can do more with ESX HA and fault tolerance on the home infrastructure.

Existing Clients/workload:
2 workstations, 2 laptops, 2 NUCs running OpenELEC/XBMC, and 2 Amazon Fire TVs running XBMC; a Cisco SG200-26 switch; 40Mb symmetrical internet connection

ESX Server & current storage platform:
Currently I have an ASRock motherboard with an i7-3770 and 32GB RAM running ESX 5.5u2, with 10 drives installed in it. It runs around 8 VMs as standard (a pfSense firewall with multiple IPsec tunnels, a couple of Windows servers, the vCenter appliance, kidsplex, SABnzbd, and a few Linux TurnKey appliances). It gets a few additional ones here and there as I test various applications along the way.

Storage in ESX Server:
  • 4x 4TB & 3x 3TB WD Reds are RDM-mapped into a Windows server for home file sharing across all clients; that VM also runs MySQL for the XBMC backend, Plex, and CrashPlan for cloud backups
  • WD 600GB 10K, WD 150GB 10K, & WD 2TB Black, all formatted as VMFS and housing the virtual machine files.
The craziest workload today is streaming 4-5 files concurrently, plus minor SMB file-share traffic for family access and SABnzbd downloads. There's right at 14.3TB of data used across the drives. All of the WD Red data drives in the server have on-the-shelf mirrors that are updated monthly or so.

Future plans for my ESX hosts are 2x Shuttle SH87R6 with an Intel i5-4590S and 32GB RAM each, to make a 2-node ESX cluster. I'll add a quad-port Intel Pro/1000 NIC to each one, leaving 1 additional slot for a future 10Gb NIC upgrade.

My future plans for the FreeNAS host are:
  • Supermicro X10SL7, Intel Xeon E3-1220 v3, 32GB Crucial RAM
  • 10x 4TB WD Reds in a single RAIDZ2 vdev (40TB raw, 32TB usable; quick capacity check after this list)
  • using existing Thermaltake Armor VA8000SWA case
  • Corsair 850W power supply (this might be scaled back, but I already own it)
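
For sanity, here's the back-of-the-envelope math on that layout. It's just arithmetic; the real number will land a bit lower once ZFS metadata and the usual ~80% pool-fill guideline are factored in:

```python
# Back-of-the-envelope check on the 10x 4TB RAIDZ2 layout.
drives, size_tb, parity = 10, 4.0, 2   # RAIDZ2 carries 2 drives of parity

raw_tb = drives * size_tb
usable_tb = (drives - parity) * size_tb
usable_tib = usable_tb * 1e12 / 2**40  # marketing TB -> TiB as the OS reports it

print(f"raw: {raw_tb:.0f}TB, usable: {usable_tb:.0f}TB (~{usable_tib:.1f}TiB)")
# -> raw: 40TB, usable: 32TB (~29.1TiB)
```
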
This leaves me with 2 open 3.5" drive slots in the case in my current configuration, without changing any drive bays. I've got a few SSDs lying around that I could use for caching or such, but it sounds like that may or may not help depending on the workload. The SSDs are SLC-based Micron P300 drives; they don't have internal capacitors, but this system will be attached to a UPS with the signaling cable hooked up for a controlled shutdown. (I'd much rather have the storage system shut down cleanly than the running VMs, but will work to achieve both; see the sketch below.)
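
Roughly the shutdown ordering I have in mind, as a minimal sketch. This assumes a NUT (Network UPS Tools) setup where `upsc` can query the UPS; the UPS identifier and the ESX host names below are made up:

```python
#!/usr/bin/env python3
"""Minimal sketch: on battery power, shut down the VMs first, storage last."""
import subprocess
import time

UPS = "ups@localhost"           # hypothetical NUT UPS identifier
VM_HOSTS = ["esx1", "esx2"]     # hypothetical ESX host names


def on_battery() -> bool:
    # `upsc <ups> ups.status` prints "OL" when online, "OB" when on battery
    out = subprocess.run(["upsc", UPS, "ups.status"],
                         capture_output=True, text=True, check=True)
    return "OB" in out.stdout


while True:
    if on_battery():
        # Power the VMs off first so the pool sees a clean quiesce,
        # then take the storage box down last.
        for host in VM_HOSTS:
            subprocess.run(["ssh", host, "poweroff"])
        subprocess.run(["shutdown", "-p", "now"])   # FreeBSD-style halt
        break
    time.sleep(30)
```
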

From all the reading, I'm concerned about both performance at the ESX layer and whether the amount of RAM is sufficient, since that board tops out at 32GB. The RAM part is probably the easiest to solve: I can just change to something in the socket 2011 line and ensure I have an avenue to expand RAM easily along the way (rough numbers below). The ESX layer is the more complex part in my mind. I'm not worried about getting massive performance in the VMs; again, this is more of a test lab for my needs, plus file sharing & internet access for the family. I could add 2 drives of some sort (10K or SSD) and do a RAID mirror in FreeNAS as a separate vdev/pool just for the VMs; I don't have large capacity demands inside the VMs. Also, for certain VMs, I could use local storage in the ESX hosts. I understand that in most cases the 1Gb network links will be the limiting factor, but I have plans for at least 10Gb between the ESX hosts and the FreeNAS storage soon.
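
For what it's worth, here are the rough numbers against the common community guideline of ~1GB of RAM per TB of storage (a rule of thumb, not a hard requirement):

```python
# Rough RAM sizing against the community "~1GB RAM per TB" guideline
# (a rule of thumb, not a hard requirement; VM block storage generally
# wants considerably more on top of this).
pool_usable_tb = 32      # planned 10x 4TB RAIDZ2 pool
board_max_gb = 32        # X10SL7 / socket 1150 RAM ceiling

guideline_gb = pool_usable_tb * 1
print(f"guideline: ~{guideline_gb}GB RAM, board max: {board_max_gb}GB, "
      f"headroom: {board_max_gb - guideline_gb}GB")
# -> guideline: ~32GB RAM, board max: 32GB, headroom: 0GB
```
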

Like I said, I've been dwelling on this for weeks. Any feedback or comments are greatly appreciated, thanks!!

-k
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Ultimately the problem you are facing is that you probably need a separate pool for the VM storage, and that you're short on memory. The 32GB you have is almost certainly adequate for the 30TB pool itself, but VM storage is a very different (and difficult) beast because of the essentially random nature of the I/O, which works to fragment a pool and drives up memory pressure.

The E5 stuff is crazy expensive compared to the E3 stuff, so based just on what you've described here, here's my take on it:

Get the E3. Put a separate SSD pool on it for VM storage. Expect that sooner or later performance could fall off and become unpleasant, at which point you can:
a) move the VMs back to local disk, or
b) get another E3 + 32GB to handle JUST the VMs (probably cheaper than the E5 route, too!)
 