Considering running under VMware ESXi

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
I'm in the process of putting together my first FreeNAS build, and after doing some reading I'm now considering running under ESXi. This is primarily because I may want to run a Windows VM for XProtect (Security Camera Software), but it may also prove useful to have some desktop VMs for other purposes.

In reading the docs and various posts about ESXi, I think I need to purchase an HBA, but I'm still a little fuzzy on the details. Here's my current hardware:

Intel® Xeon® E-2136 Processor
Supermicro X11SCL-F Server Board
Crucial 16GB DDR4-2666 ECC UDIMM CT16G4WFD8266
Kingston A400 SSD 120GB x 1 (boot)
Western Digital Red 4 TB 3.5" 5400RPM x 5 (pool)
Seasonic 650W Focus Plus
Fractal Design Define R6


I'm thinking that if I buy the following HBA, I can use PCI passthrough to give FreeNAS direct access to all the drives (1x SSD for boot and 5x HDD for the pool).

https://www.ebay.com.au/itm/LSI-6Gb...ode-ZFS-FreeNAS-unRAID-AU-seller/143058624384

I'd also need to buy another SSD to install ESXi on, which I could connect to one of the motherboard's SATA ports.

Does this sound like I'm on the right track? Anything else about the above hardware that would be a problem for this setup?

Thanks for your advice.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
You're on the right track as far as passing the HBA through, and using the motherboard SATA to boot ESXi from an SSD.

You'll want more RAM though. 16GB will vanish fast, with 8GB going to FreeNAS and a chunk reserved for the hypervisor itself. 32GB might be a better starting point.
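Rough numbers to illustrate why it vanishes; these are ballpark assumptions for a FreeNAS guest plus a single Windows VM for XProtect, not a sizing guide:

```python
# Back-of-envelope RAM budget for an ESXi host running FreeNAS plus one Windows VM.
# All figures below are rough assumptions for illustration, not measured values.
esxi_overhead_gb = 2     # hypervisor itself plus per-VM overhead (assumption)
freenas_gb = 8           # practical minimum for a FreeNAS guest
windows_vm_gb = 4        # a lean Windows VM for XProtect (assumption)

for total_gb in (16, 32):
    leftover = total_gb - esxi_overhead_gb - freenas_gb - windows_vm_gb
    print(f"{total_gb}GB host: ~{leftover}GB headroom for ARC growth or extra VMs")
```

With 16GB the headroom is roughly 2GB, i.e. nothing spare; with 32GB there's comfortable room to grow.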
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
@HoneyBadger - Sorry, my bad, I have 2 of those for a total of 32GB. I would love to upgrade to 64GB when I can afford it.

@Pheran - I was initially thinking I'd do that, but I've read a few comments about the VM support in FreeNAS not being very reliable yet. Those comments may be out of date now though? I tried to install Windows under bhyve in my test FreeNAS system, but didn't have any luck. I got some cryptic error message about the machine not being configured, but I think that's probably due to running my test system under VirtualBox. ESXi does sound pretty sweet though, so I thought I'd give it a go if I could.

I've skimmed through the very detailed post about ESXi setup by @Stux here (will read it in more detail when my hardware arrives). But there are a couple of things I'm a little confused by:

1. The info I've seen says your FreeNAS boot drive cannot be used for anything else. In the case of ESXi running off an SSD connected directly to the motherboard, am I right in understanding you can create a virtual disk on that SSD and install FreeNAS on that? Wouldn't this mean that you lose the benefits of ZFS for that boot drive (which don't apply unless the disks are exposed directly)? Or do you install FreeNAS on a drive connected to the passed-through HBA somehow?

2. The iXsystems blog post here says something I don't understand:

[attached screenshot of the quoted passage from the blog post]


My plan has been to do a single 5-disk RaidZ2 vdev. I thought I understood how disk configuration worked from reading the docs, but the statement above seems to contradict everything I've read. Doesn't a pool with a single RaidZ2 vdev keep your data just as safe as a pool with multiple RaidZ2 vdevs?
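For what it's worth, here's the back-of-envelope arithmetic I've been working from for the 5-disk RAIDZ2 layout (raw numbers only, ignoring ZFS overhead and the usual TB-vs-TiB difference):

```python
# Rough capacity/fault-tolerance sums for a single 5 x 4TB RAIDZ2 vdev.
# Ignores ZFS metadata overhead, slop space, and decimal-vs-binary units.
disks = 5
disk_tb = 4
parity_disks = 2                               # RAIDZ2 can lose any two disks

raw_tb = disks * disk_tb                       # 20 TB raw
usable_tb = (disks - parity_disks) * disk_tb   # ~12 TB usable
print(f"raw: {raw_tb} TB, usable: ~{usable_tb} TB, "
      f"survives any {parity_disks} simultaneous disk failures")
```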

Thanks for your help!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
1. While you can install FreeNAS on a drive that's connected to the HBA you pass through, you still need at least the VMX (VM definition file) stored outside of it so that ESXi knows it has to pass the HBA through. Regarding "losing the benefits of ZFS" for the boot drive, that's not nearly as crucial as it is for the data. Provided that you back up your configuration frequently, you can recover from a failed boot device in just a few moments (at least from the perspective of FreeNAS; your ESXi install may take a little more convincing).

2. This is about the ZFS metadata specifically; even in a single-vdev pool the metadata inherits the redundancy of that vdev, be it mirror or RAIDZ, but it's so important that it also gets mirrored across vdevs to protect it further. Losing a vdev entirely still typically means "pool is toast."

As far as hypervisors go, VMware is definitely the most mature option; bhyve is good to have but I wouldn't consider it anywhere near the same quality (yet). And 32GB should be fine, especially if you're getting 2x16GB which leaves you room to grow.
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
1. OK, that makes sense. I think it's probably simpler to just install FreeNAS on the ESXi drive (I was planning on installing it on a single small/cheap Kingston SSD anyway). I've ordered a 1TB EVO for the ESXi drive, and that leaves the Kingston free to use for something else. From what I understand I probably won't have much need for an SLOG or an L2ARC (Plex media server, family photos & security camera storage), but I'm not sure if ESXi changes that at all. I will likely run a couple of Windows VMs for work stuff if all goes well.

2. I see, so there's some case where you can perhaps recover some data from your pool after a vdev failure, as long as all your metadata is safe (mirrored on other vdevs)? It sounds like this is somewhat of an edge case, and for my purposes a single 5-disk RAIDZ2 vdev would be OK - am I understanding that correctly?

Thanks again for your advice.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
1. You're correct about L2ARC/SLOG not being as valuable for general media. Putting VMs on the ZFS pool wouldn't be an ideal setup, but if you do, an SLOG might come into play.

2. Generally, no. Loss of a vdev usually means the whole pool is unavailable. It's more that in a scenario where you lose redundancy on a vdev (such as with mirrors), you don't trash the entire pool because of a read error in the metadata on the degraded vdev. I wouldn't sweat it much in your scenario.

Just remember that RAID/vdev redundancy is not a backup. Make sure to keep those family photos safely backed up elsewhere as well.
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
I'm thinking the VMs will just sit outside ZFS on the SSD that ESXi is installed on. I don't anticipate caring too much if I lose them due to drive failure (and I'm hoping ESXi lets you share the boot drive with your VMs).

Planning on setting up a Backblaze backup once I'm up and running. Been relying on a single-disk QNAP for 10 years now... (with external USB drive backup), so it's time to step up my game!

Thanks for your help @HoneyBadger
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
On one of my IBM x3650 M3 servers I have ESXi and FreeNAS running off the ESXi boot drive. On the other one I have FreeNAS on its own SSD passed through from Proxmox to the VM. Both have the HBA passed through via PCI passthrough.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm hoping ESXi lets you share the boot drive with your VMs
As long as you are booting from a non-USB device, it will automatically claim the remaining disk space as a VMFS partition. You might want to upgrade it to VMFS6 so that it can benefit from disk space reclamation.
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
@blueether - thanks, glad to know it works. I also hadn't heard about Proxmox - how would you rate it against ESXi?

@HoneyBadger - I installed ESXi in VirtualBox yesterday and had a quick play around with it. Looks great! I tried to install Windows, but it didn't work (the whole VM-in-a-VM thing). I'll have to get my head around the way it handles storage etc., but the UI seems nice. Looking forward to getting my hardware so I can give it a real go.
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
I started to look for hypervisors other than ESXi when my 60-day trial was coming to an end, as I have 12 cores (24 vCPUs) and it seemed harsh to limit a VM to just 8 vCPUs.

Proxmox seems very capable, although there is less user-created content out there for finding things out. It has a few little niggles, and some things are less intuitive than ESXi, but in saying that, ESXi has its problems too; I managed to hang it several times, so it's less than perfect. iSCSI never seems to mount after a reboot on ESXi for me, but I've never had an issue in Proxmox. Another plus for Proxmox is that it's open source and free. I've not spun up a Windows install under it yet so I can't rate it on that, but I have FreeBSD, FreeNAS x2, pfSense, and a Linux install or two all happy in Proxmox. One more plus for Proxmox: the GUI seems quicker than ESXi even though it's running on the IBM with slower CPUs, a lower core count, and half the RAM. It will be moved to the better hardware in the next week or two.

I'd spin up a VM of Proxmox, test it out, and see if you like it.
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
One option is to boot ESXi from a USB drive and use your SSD as a host for VMs. Set up passthrough for the HDDs and you should be in good shape. Ideally you'd get a bigger SSD, or more of them, if you run a bunch of VMs.
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
it seemed harsh to limit a VM to just 8 vCPUs
@blueether I've just been looking at the free version's limitations and trying to get my head around this vCPU configuration. The explanation here is rather confusing. It goes to some lengths to explain that each vCPU appears to the guest OS as having a single core, but then talks about a setting that lets you define the number of cores for each CPU. Although I imagine it's not really going to be a problem for me, my CPU is 6-core/12-thread, and I'm just wondering if the 8-vCPU limit of the free version will limit me at all. Can I configure a VM to have 6 vCPUs, each with 2 cores, and have that map to my 12 threads?

@joeinaz - I've bought a 1TB SSD in addition to the stuff up top for the purpose of ESXi. I was under the impression I can use it as both the ESXi boot drive and the storage for VMs. Am I misunderstanding something there, or do you think it's better to keep the boot separate? I do have the cheap Kingston 120GB SSD I have no use for at the moment. Perhaps I should connect both SSDs directly to the motherboard SATA and use the small one for the ESXi install/boot and the 1TB just as a VM store (and pass through the HBA with all the spinning disks to the FreeNAS VM)?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You should definitely check out

https://www.ixsystems.com/community...ide-to-not-completely-losing-your-data.12714/

All this talk of random hypervisors means that you'll be the guinea pig for one of those random hypervisors, and virtualizing FreeNAS is really a situation where the engineering has to be 100% correct.

I started to look for hypervisors other than ESXi when my 60-day trial was coming to an end, as I have 12 cores (24 vCPUs) and it seemed harsh to limit a VM to just 8 vCPUs.

With VMs, you should not be setting vCPU counts any higher than you actually need. First, be aware that with the advent of Spectre/Meltdown/L1TF, some of the mitigations adversely affect CPUs/threads, and stuff like the SCA Scheduler will create some real complexity for performance optimization.

Next, almost no VM should have 8 vCPUs unless you have a massive number of cores and also an actual need for that much CPU. A VM cannot be scheduled to run unless all requested CPU resources can actually be allocated. Depending on your configuration, your scheduler may not be considering your CPU to be a "24 vCPU" host, and you may be artificially capping your performance by asking for 8 vCPUs on a 12-CPU host. So if you ask for 8+ vCPUs, you can create a situation where scheduling for that VM is actually less frequent, and your performance drops.

There are some other bad ideas/not-advisable bits of info in this thread, but I just don't have the time right now.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
CPU scheduling in hypervisors is complex, to say the least; we haven't even broached the subject of NUMA nodes and the preference for physical vs. logical cores as related to the "cores per socket" assigned to a VM.

Suffice it to say the 8 vCPU limit in the free ESXi edition is rarely a limitation for home users.
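To put some rough numbers against your earlier question (a sketch only; "cores per socket" just changes the topology presented to the guest, while the free-licence cap counts the total vCPUs per VM):

```python
# Total vCPUs assigned to a VM is sockets x cores-per-socket, regardless of how
# the guest sees the topology. The free ESXi licence caps this at 8 per VM.
FREE_ESXI_VCPU_LIMIT = 8
host_threads = 12                      # E-2136: 6 cores / 12 threads

def vm_vcpus(sockets: int, cores_per_socket: int) -> int:
    return sockets * cores_per_socket

for sockets, cores in [(1, 6), (2, 4), (1, 12)]:
    v = vm_vcpus(sockets, cores)
    print(f"{sockets} socket(s) x {cores} cores = {v} vCPUs -> "
          f"{'within' if v <= FREE_ESXI_VCPU_LIMIT else 'over'} the free-licence cap, "
          f"{'fits on' if v <= host_threads else 'exceeds'} a {host_threads}-thread host")
```

So a 6-vCPU FreeNAS VM is comfortably inside the cap; you'd only bump into it if you tried to hand a single VM more than 8 vCPUs.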
 

joeinaz

Contributor
Joined
Mar 17, 2016
Messages
188
Three things to add here:

1. Don't forget about hypervisor overhead when calculating available system resources.
2. In the BIOS/UEFI, make sure all of the VT-x extensions are enabled.
3. If you can, while in the BIOS/UEFI, set the P- and C-states to be optimized for virtualization.
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
@jgreco Thanks for the link, that was a big read with lots of great info. It sounds like I'm on the right track from a hardware point of view (Supermicro motherboard, Xeon CPU, ECC RAM, dedicated HBA, etc.). From the software side, I'll do a USB install of FreeNAS first, create my pool, and give it all a good test. Then I'll install ESXi on a motherboard-SATA SSD, create a new VM install of FreeNAS with the passed-through HBA, and import the config from the USB install. I'll probably run it like this for a couple of months before moving any important data to it.

I need to do more reading about the FreeNAS config backup/restore - I'm a little confused by it. I understand there's a local DB that holds the config, and that you should back that up. However, I've seen people talk about "importing the pool", and that appears to be a separate operation to "restoring the config". I'm hoping this means the pool data can actually be read correctly without a backed-up config, and that the config just restores all the other FreeNAS settings (e.g. users, jails, etc.).

In that thread your advice was "Have a formalized system for storing the current configuration automatically, preferably to the pool.", which makes me think you must be able to import a pool and read data from it before restoring your configuration. Does FreeNAS automatically detect the pool's setup (vdevs, RAID type, etc.) just by looking at the disks it's connected to?
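For the "formalized system" part, the sort of thing I have in mind is a small scheduled job that copies the config database out to a dataset on the pool with a date stamp. This is a completely untested sketch on my part; the config path is just what I've read it to be (/data/freenas-v1.db) and /mnt/tank/... is a hypothetical dataset name.

```python
#!/usr/bin/env python3
# Hypothetical scheduled FreeNAS config backup: copy the config database to a
# dataset on the pool with a date stamp in the filename. Untested sketch.
import shutil
from datetime import date
from pathlib import Path

CONFIG_DB = Path("/data/freenas-v1.db")          # FreeNAS config DB (as I understand it)
BACKUP_DIR = Path("/mnt/tank/sysadmin/config")   # hypothetical dataset on the pool
KEEP = 30                                        # how many dated copies to retain

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
dest = BACKUP_DIR / f"freenas-v1-{date.today().isoformat()}.db"
shutil.copy2(CONFIG_DB, dest)

# Prune the oldest copies beyond the retention limit.
for old in sorted(BACKUP_DIR.glob("freenas-v1-*.db"))[:-KEEP]:
    old.unlink()
```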

(BTW, the "Please do not run FreeNAS in production as a Virtual Machine!" link at the top of that thread just takes me to what appears to be some kind of admin login page)

Suffice it to say the 8 vCPU limit in the free ESXi edition is rarely a limitation for home users.
@HoneyBadger Sounds like I need to 'start small' and only assign the CPU resources I think I'll need in any VM. I'm not really sure yet how much of my 32GB of RAM to assign to FreeNAS. From what I've read, our use case (mostly media streaming and photo storage) wouldn't get a whole lot of benefit from a bigger read cache. Somewhere between 8 and 16GB perhaps.

1. Don't forget about hypervisor overhead when calculating available system resources.
2. In the BIOS/UEFI, make sure all of the VT-x extensions are enabled.
3. If you can, while in the BIOS/UEFI, set the P- and C-states to be optimized for virtualization.
@joeinaz Excellent points, thanks - I'll have a dig around in the UEFI config once I've got it up and running. I've never used IPMI before, so that's my first hurdle to work out (it looks like it has a dedicated network port, and hopefully it just uses DHCP). I imagine I can then just use a browser to access the UEFI config at startup. One thing I'm not sure about - am I right in understanding that HBA cards have their own boot-time config screen (where you can see various info about their firmware/driver versions etc.)?

Thanks again to everyone for the help. Most of my hardware has arrived so I just have to find time to build the thing now.
 

ChrisNAS

Explorer
Joined
Apr 14, 2017
Messages
71
Been down this road. After the mess with FN 11.1->2 and jacked-up jails, I am no longer interested in FN doing anything other than what it does best... manage storage. I built a system with similar hardware where FN is running as a VM on ESXi with the HBA passed through. Going on 2 years and it's been running great. ESXi is very nice. Very polished. Very easy to work with. I'm currently building a second system which is almost done... and it's going to be just like the first, but much, much beefier for higher demand.
 

zeebee

Explorer
Joined
Sep 18, 2019
Messages
50
That's good to hear @ChrisNAS. I had a bit of a play with jails initially, but the more I read the more I thought ESXi was worth giving a go. The only downside may be the resource allocation thing, but I'm hoping it won't be an issue. If FreeNAS is doing nothing but storage + cloud backups, how many vCPUs would you give it?

My hardware finally finished arriving today, so I've put the mobo/CPU/RAM together. Got IPMI working without any trouble (except it uses Java, ugh! Installed it in Windows Sandbox for now). I'm running memtest on it and no problems so far. Next step will be to do a USB FreeNAS install and read the disk burn-in thread!
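In case it's useful to anyone following along, this is roughly how I expect to kick off the SMART long tests as the first step of burn-in; the device names are guesses until I see what FreeNAS calls the drives behind the HBA.

```python
#!/usr/bin/env python3
# Start a SMART extended (long) self-test on each pool disk via smartctl.
# Device names are placeholders; adjust to whatever the HBA exposes (da0..da4 or ada0..ada4).
import subprocess

DISKS = [f"/dev/da{i}" for i in range(5)]   # assumed names for the 5 WD Reds

for disk in DISKS:
    # "-t long" tells the drive to run its extended self-test in the background.
    subprocess.run(["smartctl", "-t", "long", disk], check=True)
    print(f"Started long self-test on {disk}; check results later with: smartctl -a {disk}")
```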

Edit: Just found the HTML5 link in IPMI. Bit clunky, but I'll call that a win!
 