[---- 2018/02/27: This is still as relevant as ever. As PCIe-Passthru has matured, fewer problems are reported. I've updated some specific things known to be problematic. ----]
[---- 2014/12/24: Note, there is another post discussing how to deploy a small FreeNAS VM instance for basic file sharing (small office, documents, scratch space). THIS post is aimed at people wanting to use FreeNAS to manage lots of storage space. ----]
You need to read "Please do not run FreeNAS in production as a Virtual Machine!" ... and then not read the remainder of this. You will be saner and safer for having stopped.
<the rest of this is intended as a starting point to be filled in further>
But there are some of you who insist on blindly charging forward. I'm among you, and there are others. So here's how you can successfully virtualize FreeNAS, less-dangerously, with a primary emphasis on being able to recover your data when something inevitably fscks up. And remember, something will inevitably fsck up, and then you have to figure out how to recover. Best to have thought about it ahead of time.
- Pick a virtualization platform that is suitable to the task. You want a bare metal, or "Type 1," hypervisor. Things like VirtualBox, VMware Fusion, VMware Workstation, etc. are not acceptable.
VMware ESXi is suitable to the task.
Hyper-V is not suitable for the task, as it is incompatible with FreeBSD at this time.
I am not aware of specific issues that would prevent Xen from being suitable. There is some debate as to the suitability of KVM. You are in uncharted waters if you use these products.
- Pick a server platform with specific support for hardware virtualization with PCI-Passthrough. Most of Intel's Xeon family supports VT-d, and generally users have had good success with most recent Intel and Supermicro server grade boards. Other boards may claim to support PCI-Passthrough, but quite frankly it is an esoteric feature and the likelihood that a consumer or prosumer board manufacturer will have spent significant time on the feature is questionable. Pick a manufacturer whose support people don't think "server" means the guy who brings your food at the restaurant.
You will actually want to carefully research compatibility prior to making a decision and prior to making a purchase. Once you've purchased a marginal board, you can spend a lot of time and effort trying to figure out the gremlins. This is not fun or productive. Pay particular attention to the reports of success or failure that other ESXi users have had with VT-d on your board of choice. Google is your friend.
Older boards utilizing Supermicro X8* or Intel 5500/5600 CPUs and prior are expected to have significant issues, some of which are fairly intermittent and may not bite you for weeks or months. All of the boards that have been part of the forum's recommended hardware series seem to work very well for virtualization.
- Do NOT use VMware Raw Device Mapping. This is the crazy train to numerous problems and issues. You will reasonably expect that this ought to be a straightforward, sensible solution, but it isn't. The forums have seen too many users crying over their shattered and irretrievable bits. And yes, I know it "works great for you," which seems to be the way it goes for everyone until a mapping goes wrong somehow and the house of cards falls. Along the way, you've probably lost the ability to monitor SMART and other drive health indicators as well, so you may not see the iceberg dead ahead.
- DO use PCI-Passthrough for a decent SATA controller or HBA. We've used PCI-Passthrough with the onboard SAS/SATA controllers on mainboards, and as another option, LSI controllers usually pass through fine. Get a nice M1015 in IT mode if need be. Note that you may need to twiddle with the hw.pci.enable_msi/msix tunables to make interrupt storms stop (see the loader.conf sketch after this list). Some PCH AHCIs ("onboard SATA") and SCUs ("onboard SAS/SATA") work. Tylersburg does not work reliably. I've seen Patsburg and Cougar Point work fine on at least some Supermicro boards, but have had reports of trouble with the ASUS board. The Ivy Bridge CPU era is the approximate tipping point where things went from "lots of stuff does not work" to "likely to work."
- Try to pick a board with em-based network interfaces. While not strictly necessary, the capability to have the same interfaces for both virtual and bare metal installs makes recovery easier. Much easier.
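To make the MSI/MSI-X workaround from the HBA item above concrete: these are standard FreeBSD loader tunables, set in /boot/loader.conf (or as Tunables in the FreeNAS GUI). Only add them if you actually see interrupt storm messages on the console; this is a sketch of a workaround, not a recommended default:

```
# Disable MSI-X interrupt delivery; if storms persist, disable MSI too.
hw.pci.enable_msix="0"
hw.pci.enable_msi="0"
```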
Now at this point, if ESXi were to blow up, you can still bring the FreeNAS back online with a USB key of FreeNAS, and a copy of your configuration. This is really the point I'm trying to make: this should be THE most important quality you look for in a virtualized FreeNAS, the ability to just stick in a USB key and get on with it all if there's a virtualization issue. Your data is still there, in a form that could easily be moved to another machine if need be, without any major complicating factors.
But, some warnings:
- Test, test, and then test some more. Do not assume that "it saw my disks on a PCI-Passthru'd controller" is sufficient proof that your PCI-Passthrough is stable. We often test even stuff we expect to work fine for weeks or months prior to releasing it for production.
- As tempting as it is to under-resource FreeNAS, do try to allocate resources to it aggressively, both memory and CPU.
- Make sure your virtualization environment has reserved resources, specifically including all memory, for FreeNAS. There is absolutely no value in allowing your virtualization environment to swap the FreeNAS VM. (See the .vmx sketch after this list.)
- Do not try to have the virtualization host mount the FreeNAS-in-a-VM for "extra VM storage". This won't work, or at least it won't work well, because when the virtualization host is booting, it most likely wants to mount all its datastores before it begins launching VMs. You could have it serve up VMs to other virtualization hosts, though, as long as you understand the dependencies. (This disappoints me too.)
--update-- ESXi 5.5 appears to support rudimentary tiered dependencies, meaning you should be able to get ESXi to boot a FreeNAS VM first.
Due to lack of time I have not tried this. If you do, report back how well (or if) it works.
- Test all the same things, like drive replacement and resilvering, that you would for a bare metal FreeNAS implementation. (A sketch follows this list.)
- Have a formalized system for storing the current configuration automatically, preferably to the pool. Several forum members have offered scripts of varying complexity for this sort of thing. This makes restoration of service substantially easier. (A minimal example follows this list.)
- Since you lack a USB drive key, strongly consider having a second VM with a 4GB boot disk configured and ready to go for upgrades and the like. It is completely awesome to be able to shut down one VM and bring up another a few moments later and restore service at the speed of an SSD datastore.
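On the memory reservation point above: you would normally set the reservation in the vSphere client, but the resulting .vmx entries look roughly like this sketch for a hypothetical 16GB FreeNAS guest (the option names are standard ESXi scheduler settings; the sizes are whatever you've allocated):

```
memsize = "16384"
sched.mem.min = "16384"
sched.mem.pin = "TRUE"
```

memsize is the allocation, sched.mem.min is the reservation in megabytes (here, all of it), and sched.mem.pin asks ESXi to back it with physical RAM. Note that ESXi requires a full memory reservation for any VM with PCI-Passthrough devices anyway, so it will largely force your hand here.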
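On testing drive replacement: the FreeNAS GUI is the normal way to do this, but on a scratch pool you can exercise the whole path from the console with something like the following (pool and device names here are hypothetical; do not practice on a pool you care about):

```
# Fail a disk on purpose, replace it, and watch the resilver.
zpool offline tank da3
zpool replace tank da3 da4
zpool status tank
```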
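And on automatic config storage: the forum scripts vary in sophistication, but the core of all of them is just copying the configuration database to the pool on a schedule. A minimal sketch, run nightly from cron, assuming a hypothetical dataset mounted at /mnt/tank/configs (/data/freenas-v1.db is where FreeNAS keeps its config):

```
#!/bin/sh
# Nightly FreeNAS config backup to the pool.
DEST="/mnt/tank/configs"
cp /data/freenas-v1.db "${DEST}/freenas-$(hostname)-$(date +%Y%m%d).db"
# Keep roughly two months of history.
find "${DEST}" -name 'freenas-*.db' -mtime +60 -delete
```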