BUILD ESXi Home Server + FreeNAS


Gunndy
Dabbler | Joined: May 17, 2016 | Messages: 18

All,

I know this question has been asked and covered in other posts, and I've read all of them I can find, but I'm stuck on a decision about upgrading my existing FreeNAS system and on pricing out the parts.

Currently I have FreeNAS running on a Z79 chipset board with an i3-4130 and 16 GB of non-ECC RAM (yeah, I know... that's one of the reasons I'm rebuilding).

However, when I start pricing out replacement parts, everything I put together from used/inexpensive server-grade parts ends up being WAY overkill for running FreeNAS alone. On top of that, for only a small additional investment over a single-processor + ECC memory solution, I can build a VERY robust ESXi server and run FreeNAS + an AD domain controller + pfSense + a dedicated Plex server and more.

I'm trying to get an idea of whether I'm setting myself up for failure or not with the following configuration:

PCPartPicker part list: http://pcpartpicker.com/p/38LRNG
Price breakdown by merchant: http://pcpartpicker.com/p/38LRNG/by_merchant/

*CPU: Intel Xeon E5-2670 2.6GHz 8-Core Processor ($65.00 @ eBay)
CPU: Intel Xeon E5-2670 2.6GHz 8-Core Processor ($65.00 @ eBay)
CPU Cooler: Noctua NH-D9DX i4 3U 46.4 CFM CPU Cooler ($59.99 @ Newegg)
CPU Cooler: Noctua NH-D9DX i4 3U 46.4 CFM CPU Cooler ($59.99 @ Newegg)
Motherboard: ASRock EP2C602-4L/D16 SSI EEB Dual-CPU LGA2011 Motherboard ($307.99 @ SuperBiiz)
Memory: Kingston 64GB (4 x 16GB) Registered DDR3-1600 Memory ($304.99 @ SuperBiiz)
*Storage: Kingston SSDNow V300 Series 120GB 2.5" Solid State Drive ($41.33 @ OutletPC)
*Storage: Kingston SSDNow V300 Series 120GB 2.5" Solid State Drive ($41.33 @ OutletPC)
*Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
*Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
*Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
*Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
*Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
*Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
*Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
*Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
Storage: Western Digital Red 2TB 3.5" 5400RPM Internal Hard Drive ($88.99 @ NCIX US)
*Power Supply: EVGA 850W 80+ Bronze Certified Semi-Modular ATX Power Supply ($69.99 @ Newegg)
*Other: Rosewill RSV-L4412 - 4U Rackmount Server Chassis, 12 SATA / SAS Hot-swap Drives ($199.99 @ Newegg)
*Other: LSI Internal SATA/SAS 9211-8i 6Gb/s PCI-Express 2.0 RAID Controller Card ($100.00 @ eBay)
Total: $2383.48


It seems that with this config, I can use pass-through to hand the LSI controller and all 8 drives directly to the FreeNAS VM for 12TB of RAIDZ2 storage.

Then use the other 4 WD Red drives for VMs, with the 2 SSDs mirrored to host the ESXi OS.
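Rough capacity math behind those numbers (back-of-the-envelope only; it ignores ZFS overhead and the usual keep-some-free-space guidance):

Code:
# Usable space of a single RAIDZ vdev is roughly (drives - parity) * drive size.
def raidz_usable_tb(drives, drive_tb, parity=2):
    return (drives - parity) * drive_tb

print(raidz_usable_tb(8, 2))  # 8 x 2TB in RAIDZ2 -> 12 TB usable
print(raidz_usable_tb(8, 6))  # the same vdev rebuilt with 6TB drives -> 36 TB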

Ultimately I'm looking to build the following servers:
1) Windows 2012 R2 Server (Active Directory)
2) FreeNAS (Storage)
3) pfSense (Firewall/Router)
4) Plex Media Server
5+) Various other VMs to refresh or learn Windows Server/AD/Linux/etc. skills as needed.

Please remember, this is NOT a corporate environment. We are talking about a home server with 4-7 clients and 4-5 users + guest access every now and then.

FreeNAS storage is mainly for Media Storage, Workstation Backups, and User Home drives.

How much risk is there in running FreeNAS in this situation if I pass the LSI card through, dedicated to that VM alone? The parts marked with (*) are pre-owned, so my new spending ultimately comes down to:

$65 for a 2nd Xeon E5-2670
$300 for the new dual-socket motherboard
$300 for 64 GB of ECC memory

OR

$250-$270 for a new single-socket motherboard for the existing i3-4130 or E5-2670
$150 for 32 GB of ECC memory

So for ~$300 more I get MUCH more power.
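The raw delta with those numbers (before tax/shipping) works out like this:

Code:
# Dual-socket E5 path vs. single-socket path, using the figures above.
dual_socket   = 65 + 300 + 300   # 2nd E5-2670 + dual-socket board + 64 GB ECC
single_socket = 260 + 150        # midpoint of the $250-$270 board + 32 GB ECC
print(dual_socket - single_socket)  # ~255, i.e. roughly $250-$300 extra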
 

Jailer
Not strong, but bad | Joined: Sep 12, 2014 | Messages: 4,974

Might want to take a look at @joeschmuck's recent thread on his ESXi experience. I'm sure you'll find it valuable, as it's very similar to what you are trying to do.

And much more power usage and heat. Only you can decide what's best for you.

Gunndy
Dabbler | Joined: May 17, 2016 | Messages: 18

I understand the power draw goes up compared to the smaller standalone FreeNAS box. But comparing the single ESXi server to a FreeNAS box + AD domain controller + pfSense box running as 3 separate 24/7 systems, the power draw ends up being equal or less overall.

I'm more or less trying to understand what the downsides are to doing this with FreeNAS, as long as I use pass-through on the LSI controller to give it direct access to the HDDs so ZFS is happy.
 

Gunndy
Dabbler | Joined: May 17, 2016 | Messages: 18

Unless something has changed recently, the guides on building FreeNAS boxes specifically say do not use Kingston RAM.

I'll keep that in mind and go with the Crucial RAM options that are within $1 or so in price. Thanks.
 

Dice
Wizard | Joined: Dec 11, 2015 | Messages: 1,410

I'm more or less trying to understand what the downsides are to doing this with FreeNAS, as long as I use pass-through on the LSI controller to give it direct access to the HDDs so ZFS is happy.

I've got my system running on top of ESXi 6.0u2 at the moment and it's doing fine (edit: I'm still in the testing phase - I haven't committed my full pool of valuable data yet - but I've used my old config file and added a couple of empty drives to play around with for testing).
More or less all the info needed is provided in the joeschmuck thread mentioned above.
My own memo list/formula for how to do ESXi + FreeNAS. Just a heads up, I'm no pro. The extremely shortened version:
  • Install FreeNAS on bare metal and export the config. Then install ESXi and import the config into the FreeNAS VM (that's how I migrated).
  • 10GB boot drive for the FreeNAS VM. I used thin provisioning (did not research benefits or risks vs. thick).
  • Have all intended expansion cards in the box when installing ESXi.
  • Dedicate as much RAM as possible. Find the VM's "Resources" tab and lock (reserve) a set amount of RAM for FreeNAS specifically (this is super important; see the sizing sketch below).
  • Leave spare CPU power reserved for FreeNAS, also on the "Resources" tab (this is a non-validated benefit).
  • Enable VT-d and any other virtualization-related BIOS settings.
  • Pass through the HBA.
  • TEST the system: put it under high CPU/RAM load, pull drives, resilver.
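For the RAM step, a rough way to size the locked reservation is the old (and much-debated) forum rule of thumb of 8GB base plus ~1GB per TB of raw pool storage - treat it as a sketch of a floor, not gospel:

Code:
# Rough floor for the FreeNAS VM's locked memory reservation, using the
# much-debated "8 GB base + ~1 GB per TB of raw pool storage" rule of thumb.
def freenas_ram_reservation_gb(raw_pool_tb, base_gb=8):
    return base_gb + int(round(raw_pool_tb))

print(freenas_ram_reservation_gb(16))  # 8 x 2TB raw -> reserve at least ~24 GB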

Cheers,
 

Gunndy
Dabbler | Joined: May 17, 2016 | Messages: 18

I've got my system running on top of ESXi 6.0u2 at the moment and it's doing fine (edit: I'm still in the testing phase - I haven't committed my full pool of valuable data yet - but I've used my old config file and added a couple of empty drives to play around with for testing).
More or less all the info needed is provided in the joeschmuck thread mentioned above.
My own memo list/formula for how to do ESXi + FreeNAS. Just a heads up, I'm no pro. The extremely shortened version:
  • Install FreeNAS on bare metal and export the config. Then install ESXi and import the config into the FreeNAS VM (that's how I migrated).
  • 10GB boot drive for the FreeNAS VM. I used thin provisioning (did not research benefits or risks vs. thick).
  • Have all intended expansion cards in the box when installing ESXi.
  • Dedicate as much RAM as possible. Find the VM's "Resources" tab and lock (reserve) a set amount of RAM for FreeNAS specifically (this is super important).
  • Leave spare CPU power reserved for FreeNAS, also on the "Resources" tab (this is a non-validated benefit).
  • Enable VT-d and any other virtualization-related BIOS settings.
  • Pass through the HBA.
  • TEST the system: put it under high CPU/RAM load, pull drives, resilver.

Cheers, Dice.

Thanks much. Been reading the thread and just finished up. Glad to see that things seem to be going well. I'm going to read some more of the linked pages, but I have a good feeling I'll be going this route as well with my rebuild of my FreeNAS box into a hosted solution.

With 12 HDDs and 2 SSDs, I figured I'd configure the drives as follows:

8 x 2TB in RAIDZ2 on the LSI 9211 controller dedicated to the FreeNAS VM (pass-through)
4 x 2TB in RAID10 on the motherboard SATA controller as an ESXi datastore for VMs
2 x 120GB in RAID1 on the motherboard SATA controller for the ESXi OS (or maybe just use one of the 120GB SSDs and find another use for the 2nd one in a friend's system or something)
 

ChriZ
Patron | Joined: Mar 9, 2015 | Messages: 271

You could also consider Skylake for the same build.
A capable LGA1151 Xeon should serve you well: it supports 64GB of RAM, and draws much less power and generates much less heat than the 2 x 2670s.
For example, an LGA1151 Supermicro mobo with an integrated SAS controller plus an E3-1240 v5 should cost about the same as the two 2670s plus the mobo you posted and the LSI card.
Of course the E5 system is more powerful and has room for much more RAM...
 

Gunndy
Dabbler | Joined: May 17, 2016 | Messages: 18

You could also consider Skylake for the same build.
A capable LGA1151 Xeon should serve you well: it supports 64GB of RAM, and draws much less power and generates much less heat than the 2 x 2670s.
For example, an LGA1151 Supermicro mobo with an integrated SAS controller plus an E3-1240 v5 should cost about the same as the two 2670s plus the mobo you posted and the LSI card.
Of course the E5 system is more powerful and has room for much more RAM...

Bear in mind... I already have one Xeon E5-2670, so all I need is the motherboard and RAM.

Xeon E5-2670 Processor - $65
ASRock EP2C602-4L/D16 Motherboard - $399
Kingston 32GB (4x8) Registered DDR3-1600 Memory - $155

vs

Supermicro X9SRL-F Motherboard - $271
Kingston 32GB (4x8) Registered DDR3-1600 Memory - $155

So for ~$193 extra (per the numbers above), I can get 2x the CPU performance.
 

Spearfoot
He of the long foot | Moderator | Joined: May 13, 2015 | Messages: 2,478

Thanks much. Been reading the thread and just finished up. Glad to see that things seem to be going well. I'm going to read some more of the linked pages, but I have a good feeling I'll be going this route as well with my rebuild of my FreeNAS box into a hosted solution.

With 12 HDDs and 2 SSDs, I figured I'd configure the drives as follows:

8 x 2TB in RAIDZ2 on the LSI 9211 controller dedicated to the FreeNAS VM (pass-through)
4 x 2TB in RAID10 on the motherboard SATA controller as an ESXi datastore for VMs
2 x 120GB in RAID1 on the motherboard SATA controller for the ESXi OS (or maybe just use one of the 120GB SSDs and find another use for the 2nd one in a friend's system or something)

Just in case you don't know: VMware does not provide any kind of RAID or other redundant data storage drivers for the common, garden-variety SATA ports found on motherboards. So if you want a 4 x 2TB RAID10 array for an ESXi datastore, you will need a second RAID adapter supported by ESXi for this purpose, apart from the LSI 9211 you propose to pass through to FreeNAS. This is a perfectly viable design, but I haven't tried anything like it. I just use a pair of small SSD datastores so that I can mirror my FreeNAS installation and provide scratch space for ESXi. Then I share an NFS datastore back to ESXi from FreeNAS for my other VMs.

Common practice is to boot ESXi itself from a USB stick, a DOM, or a single, small SSD: I use a 16GB SanDisk Cruzer Fit. It's easy to re-install ESXi if the need arises (provided you save off a copy of your configuration!) so a mirrored boot device isn't really necessary.

You might want to consider a design similar to mine: booting ESXi from a USB stick, using your 2 x 120GB disks as a local datastore for a mirrored installation of FreeNAS, and then putting all 12 of your 2TB drives in your FreeNAS pool - perhaps a pair of 6-disk RAIDZ2 vdevs (safer) or a set of mirrors (better performance).
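For what it's worth, the raw usable-space arithmetic on those two 12-drive layouts looks like this (it ignores ZFS overhead and free-space headroom):

Code:
# Two 6-disk RAIDZ2 vdevs vs. six 2-way mirrors, all with 2TB drives.
drive_tb = 2
raidz2_pool = 2 * (6 - 2) * drive_tb  # 16 TB usable; any two disks per vdev can fail
mirror_pool = 6 * 1 * drive_tb        # 12 TB usable; better IOPS for VM workloads
print(raidz2_pool, mirror_pool)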
 

Gunndy
Dabbler | Joined: May 17, 2016 | Messages: 18

Just in case you don't know: VMware does not provide any kind of RAID or other redundant data storage drivers for the common, garden-variety SATA ports found on motherboards. So if you want a 4 x 2TB RAID10 array for an ESXi datastore, you will need a second RAID adapter supported by ESXi for this purpose, apart from the LSI 9211 you propose to pass through to FreeNAS. This is a perfectly viable design, but I haven't tried anything like it. I just use a pair of small SSD datastores so that I can mirror my FreeNAS installation and provide scratch space for ESXi. Then I share an NFS datastore back to ESXi from FreeNAS for my other VMs.

Common practice is to boot ESXi itself from a USB stick, a DOM, or a single, small SSD: I use a 16GB SanDisk Cruzer Fit. It's easy to re-install ESXi if the need arises (provided you save off a copy of your configuration!) so a mirrored boot device isn't really necessary.

You might want to consider a design similar to mine: booting ESXi from a USB stick, using your 2 x 120GB disks as a local datastore for a mirrored installation of FreeNAS, and then putting all 12 of your 2TB drives in your FreeNAS pool - perhaps a pair of 6-disk RAIDZ2 vdevs (safer) or a set of mirrors (better performance).

In order to provide all 12 2TB drives to FreeNAS, I'd have to buy/install another 9211-8i or -4i controller to pass through, correct? 8 x 2TB gives me 12TB of effective storage for now, plus the ability to swap in 4TB or 6TB drives later for up to 36TB of RAIDZ2 space (more than I'll EVER need at home).

I was considering just using the onboard controller to do RAID10 across the 4 remaining drives for my HDD datastore and RAID0 across the 2 SSDs for an SSD datastore. I use ESXi all the time at work to build VMs, but I admit I'm a newbie at configuring and installing datastores, so I may be missing something or wrong in my assumption that I can use the onboard controller to set up my SSD and HDD RAIDs and present those to ESXi for datastore creation.
 

Spearfoot
He of the long foot | Moderator | Joined: May 13, 2015 | Messages: 2,478

In order to provide all 12 2TB drives to FreeNAS, I'd have to buy/install another 9211-8i or -4i controller to pass through, correct? 8 x 2TB gives me 12TB of effective storage for now, plus the ability to swap in 4TB or 6TB drives later for up to 36TB of RAIDZ2 space (more than I'll EVER need at home).
You're right! Without a backplane, you will need two HBAs to pass through to FreeNAS if you want to give it more than 8 disks. I was thinking you were using a system with a backplane... my bad!

I was considering just using the onboard controller to do RAID10 across the 4 remaining drives for my HDD datastore and RAID0 across the 2 SSDs for an SSD datastore. I use ESXi all the time at work to build VMs, but I admit I'm a newbie at configuring and installing datastores, so I may be missing something or wrong in my assumption that I can use the onboard controller to set up my SSD and HDD RAIDs and present those to ESXi for datastore creation.
Yeah, I wanted to do that, too, when I first started building an all-in-one like this; but VMware doesn't have RAID drivers for mobo SATA ports. Sure, you can use local disks as datastores; you just can't configure them w/ any kind of redundancy. And it's not possible to pass individual SATA ports through to FreeNAS, so you can't divvy up some ports for data storage and others for FreeNAS.

It's a conundrum, no?

Have you considered getting used gear off ebay? Something like this ought to work well with your 2TB drives: 24 bay chassis, SAS1 backplane, LSI 2008 HBA in IT mode built-in on X8DT6 motherboard, 48GB ECC RAM, 2 x Intel Xeon E5645 CPUs:

http://www.ebay.com/itm/131821625649?_trksid=p2055119.m1438.l2649&ssPageName=STRK:MEBIDX:IT
 

JJT211
Patron | Joined: Jul 4, 2014 | Messages: 323

Unless something has changed recently, the guides on building FreeNAS boxes specifically say do not use Kingston RAM.

Meh, whatever. I bought some Kingston RAM and it's been working great so far.
 

Gunndy
Dabbler | Joined: May 17, 2016 | Messages: 18

You're right! Without a backplane, you will need two HBAs to pass through to FreeNAS if you want to give it more than 8 disks. I was thinking you were using a system with a backplane... my bad!


Yeah, I wanted to do that, too, when I first started building an all-in-one like this; but VMware doesn't have RAID drivers for mobo SATA ports. Sure, you can use local disks as datastores; you just can't configure them w/ any kind of redundancy. And it's not possible to pass individual SATA ports through to FreeNAS, so you can't divvy up some ports for data storage and others for FreeNAS.

It's a conundrum, no?

Have you considered getting used gear off ebay? Something like this ought to work well with your 2TB drives: 24 bay chassis, SAS1 backplane, LSI 2008 HBA in IT mode built-in on X8DT6 motherboard, 48GB ECC RAM, 2 x Intel Xeon E5645 CPUs:

http://www.ebay.com/itm/131821625649?_trksid=p2055119.m1438.l2649&ssPageName=STRK:MEBIDX:IT

I looked around for something like that originally, but kept running into situations where anything "affordable" was limited to 2TB drives on the SAS expanders. Plus I already had an i3-4130 CPU and Z79 mobo lying around with 16 GB of DDR3-1600 memory from old systems. So I grabbed a used 9211-8i off eBay for $100 and a Rosewill 4211 12-bay chassis and built my current FreeNAS box.
 

Dice
Wizard | Joined: Dec 11, 2015 | Messages: 1,410

I just use a pair of small SSD datastores so that I can mirror my FreeNAS installation and provide scratch space for ESXi. Then I share an NFS datastore back to ESXi from FreeNAS for my other VMs.
How do you set up the boot order to make this work? (I read a grinch comment somewhere about bootstrap paradoxes.)
 

jgreco
Resident Grinch | Joined: May 29, 2011 | Messages: 18,681

Meh, whatever. I bought some Kingston RAM and it's been working great so far.

Yeah, we have some members that are still grinding axes over what was admittedly a huge clusterfsck by Kingston. There's no solid reason to assume they're any better/worse than any of the other cheap-tier manufacturers except that they actually got hit with some bad luck and it bit them. This is like the people who swear they'll never buy another {Seagate, Western Digital} hard drive because they've had problems in the past.
 

jgreco
Resident Grinch | Joined: May 29, 2011 | Messages: 18,681

How do you set up the boot order to make this work? (I read a grinch comment somewhere about bootstrap paradoxes.)

Well, that there's the trick, now, isn't it. Basically your hypervisor will grind to a halt during boot while it tries to do the NFS mount, eventually fail after several minutes, and then move on to booting without that datastore. This means that even if you've configured the VMs to power on after restart, they won't, because what would need to happen is for the FreeNAS VM to start, ESXi to retry the NFS mount, and only then your other VMs to start.

Apparently there's some internal API used by some of the new "software defined storage" devices that actually makes that a theoretical possibility, but the closest I've heard of anyone doing this successfully is having a script on the ESXi host do the datastore mount post-boot and then power on the dependent VMs. That's totally doable, but a totally disgusting hack.

Otherwise, the traditional answer is "don't set my VMs to automatically boot," and if that's acceptable in your environment, obviously that's just dandy and it'll "work" (but only because we've redefined "work" to mean "not being able to reboot without manual intervention").
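For anyone curious, here's a rough, untested sketch of that hack. It assumes it runs on the ESXi host (which ships with a Python interpreter), e.g. kicked off from /etc/rc.local.d/local.sh, and that the FreeNAS VM itself auto-starts from a local datastore; the IP, export path, datastore name, and VM IDs below are placeholders, and the esxcli/vim-cmd invocations should be double-checked against your ESXi version:

Code:
# Post-boot "mount the FreeNAS NFS datastore, then start the VMs that live on
# it" hack. Everything below is a placeholder example, not a tested recipe.
import subprocess
import time

NFS_HOST = "192.168.1.10"           # IP of the FreeNAS VM (placeholder)
NFS_SHARE = "/mnt/tank/vmstore"     # NFS export on the FreeNAS pool (placeholder)
DATASTORE = "freenas-nfs"           # name to register the datastore under in ESXi
DEPENDENT_VMIDS = ["2", "3", "4"]   # IDs as reported by `vim-cmd vmsvc/getallvms`

# Keep retrying the mount while the FreeNAS VM boots and brings up its NFS export.
mounted = False
for _ in range(60):                 # give up after ~30 minutes
    if subprocess.call(["esxcli", "storage", "nfs", "add",
                        "--host", NFS_HOST,
                        "--share", NFS_SHARE,
                        "--volume-name", DATASTORE]) == 0:
        mounted = True
        break
    time.sleep(30)

# Once the datastore is registered, power on the VMs whose disks live on it.
if mounted:
    for vmid in DEPENDENT_VMIDS:
        subprocess.call(["vim-cmd", "vmsvc/power.on", vmid])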
 

Dice
Wizard | Joined: Dec 11, 2015 | Messages: 1,410

Otherwise, the traditional answer is "don't set my VMs to automatically boot," and if that's acceptable in your environment, obviously that's just dandy and it'll "work" (but only because we've redefined "work" to mean "not being able to reboot without manual intervention").

I'll probably give this a shot.
IIRC, one half-decent setup is to have FreeNAS boot off the ESXi host's installation SSD/storage area.
Once the FreeNAS VM is running, hook up an iSCSI block device to act as a 'storage area' for the host?

Cheers,
 

jgreco
Resident Grinch | Joined: May 29, 2011 | Messages: 18,681

I'll probably give this a shot.
IIRC, one half-decent setup is to have FreeNAS boot off the ESXi host's installation SSD/storage area.

That's really the only possible setup unless you have other shared storage.

Once the FreeNAS VM is running, hook up an iSCSI block device to act as a 'storage area' for the host?

Cheers, Dice

Same problem of course.
 

Dice
Wizard | Joined: Dec 11, 2015 | Messages: 1,410

Same problem of course.
Mah, not really?
The trade-off of my suggested solution would be accepting that FreeNAS is not installed on the same storage area as the remaining VMs on the same machine.
Compared to running hardware RAID solutions, sticking with FreeNAS brings benefits like ease of administration when adding space or IOPS through additional vdevs to the pool underneath the zvol/iSCSI.
When aiming to run only a single box for all purposes, that seems like a legit trade.

Cheers,
 