ChrisReeve · Explorer · Joined: Feb 21, 2019 · Messages: 91
I wanted to post an update here with my experiences and troubles virtualizing FreeNAS.
First of all, I upgraded to the following specs:
MB: Supermicro X9DRL-3F
CPU: 2x Intel E5-2650 v2 (16C/32T total)
RAM: 128GB 1600MHz DDR3 ECC
OS Disk (ESXi + VMs, including freeNAS): Samsung 850 EVO 250GB
Main pool: 10x 10TB WD Red (white labels, shucked from WD external drives); 8 drives connected via the onboard SAS controller, 2 via an LSI 9211-8i, both controllers passed through to FreeNAS
Cache card: Intel DC P3700 400GB (HW passthrough to FreeNAS, with 20GB SLOG, 128GB L2ARC, 16GB swap) (NOT IN USE! See below)
NIC: Intel X540-T2 dual 10GbE
First-time setup took a lot of time, but was eventually successful. I didn't have to do anything special to pass through the onboard SAS controller or the SATA controller, but I left the SATA controller to ESXi for my ESXi drives. This left me with (only) 8 drives connected via the motherboard; the last two had to go through my HBA, which also showed up as available for passthrough on first boot. I also passed through my Intel DC P3700 400GB, with two partitions (256GB L2ARC, 20GB SLOG, and the rest left unpartitioned for overprovisioning).
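For anyone wanting to do the same partitioning from the FreeNAS shell, it would look something like the sketch below. This is only a rough outline: the device name (nvd0) and pool name (tank) are assumptions, so check your own device and pool names before running anything.

```shell
# Hypothetical sketch -- assumes the passed-through P3700 shows up as nvd0
# and the pool is named "tank"; adjust names to your system first.
gpart create -s gpt nvd0                        # put a GPT label on the NVMe device
gpart add -t freebsd-zfs -s 20G -l slog nvd0    # 20GB SLOG partition
gpart add -t freebsd-zfs -s 256G -l l2arc nvd0  # 256GB L2ARC partition (rest left unused for overprovisioning)
zpool add tank log gpt/slog                     # attach the SLOG as a log vdev
zpool add tank cache gpt/l2arc                  # attach the L2ARC as a cache vdev
```

Leaving the remainder of the drive unpartitioned gives the SSD controller extra spare area, which helps sustain write performance on a SLOG.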
Basically, no SSH-ing needed. I was also able to set up an NFS share and iSCSI, both of which showed up in ESXi, and I installed a few VMs on them.
Then I started getting the following error in FreeNAS about once every 60 seconds (on the VM console itself; no errors appeared in the web GUI):
Code:
nvme0: Missing interrupt
This didn't affect performance or anything else; both FreeNAS and the VMs running off it worked fine. When I ran CrystalDiskMark on a Windows 10 VM installed on the iSCSI share, it reported close to 1.5GB/s sequential reads and writes (1GB test file). But when I tried to shut down all the VMs, I was unable to do a soft power-off of FreeNAS and had to force it off.
I force-rebooted FreeNAS in ESXi and got it to boot, with several errors; the pool showed up as healthy, but with 0 bytes available, which worried me. I shut it off again, rebooted my server, and booted FreeNAS on bare metal (thanks again to the guide and the mirrored USB keys), and the pool showed up fine. I detached both my SLOG and L2ARC, shut down, rebooted into ESXi, and started FreeNAS as a VM again, this time removing the hardware passthrough of my Intel DC P3700. Now things worked again.
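Detaching the SLOG and L2ARC from the shell would look roughly like this. Again a sketch, not exact commands: the pool name (tank) and the GPT labels are assumptions, so confirm the actual vdev names with zpool status first.

```shell
# Hypothetical sketch -- assumes pool "tank" with SLOG/L2ARC partitions
# labeled gpt/slog and gpt/l2arc; verify the real names before removing.
zpool status tank            # identify the log and cache vdev names
zpool remove tank gpt/slog   # remove the SLOG (log vdev)
zpool remove tank gpt/l2arc  # remove the L2ARC (cache vdev)
```

Both log and cache vdevs can be removed safely from a live pool; no data on the main vdevs is touched.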
I have been able to reproduce this error (running ESXi 6.7), and I have tried the following fix without success: https://redmine.ixsystems.com/issues/26508
I decided that even though it is cool to have high-performance VMs running off my pool, it isn't necessary for my use case, and I don't gain any performance from the L2ARC/SLOG apart from running VMs off the pool. So I decided to give up (for now) on running the DC P3700 in virtualized FreeNAS, and just run VMs from SSDs directly.
Because I am new to this, and because I had screwed around in FreeNAS, I decided to delete it and start fresh. I installed FreeNAS 11.3-RC, without the DC P3700 this time, and it has been running flawlessly since then. High performance, and no errors (although it has only been running for a few days).
With this solution, I don't have to hassle with scripts to start/stop VMs, refresh iSCSI mounts, etc. Just keeping it simple for my use.
I wasn't able to run Plex inside virtualized FreeNAS. At first I was unable to install it as a plugin (possibly a known bug). After I passed through a physical NIC (one of the onboard 1GbE ports), the plugin installed successfully: Plex went online, mounted my media folder, and scanned it, but for some reason it couldn't play back anything at all. I admit I give up quickly, and I wanted a few more features anyway, so I decided to run Plex in either Ubuntu or Windows 10. I tried Ubuntu, but was stopped by something as simple as adding media from an SMB share, so I gave up and made a new Windows 10 VM just for Plex, which has been working flawlessly. I know this is a pretty inefficient way of running Plex, but at least it is very easy and, for now, stable.
PS: I also decided to turn Hyper-Threading OFF. The free ESXi license's limit of 8 vCPUs per VM effectively caps me at 8 threads (i.e. 4 physical cores) with HT on. Turning HT off lets me dedicate 8 physical cores to a single VM.
This leaves me with the following setup:
ESXi 6.7 with the following VMs:
FreeNAS: 4 cores, 90GB RAM
Win10Plex: 8 cores, 12GB RAM
Win10Torrenting: 8 cores, 6GB RAM (a high-speed VPN using the UDP protocol eats a lot of resources)
Ubuntu 18.04 (TeamSpeak server, and other servers through LinuxGSM on demand): 4 cores, 4GB RAM (for now).
Everything works, is stable, and performs well.
Also: I tried to install OS X High Sierra, but was unable to. I did the unlocker thing and rebooted, and then was unable to boot any VMs. I uninstalled unlocker 208 and installed unlocker 209Unofficial (I can't find 209 Release Candidate), with the same issue (Error 45, I believe). After uninstalling, VMs start again. I was able to create the VM, macOS showed up in ESXi, and it boots, but the VM is stuck in an infinite loop during boot: I see the loading bar filling up, but when it gets full, it starts over again. Any way to solve this? KEEP IN MIND, this was with the unlocker UNINSTALLED, since I am unable to boot ANY VM with it installed.
But all in all, I am very happy that I followed your guide and virtualized FreeNAS!
PS: I don't know why, but FreeNAS performs BETTER when virtualized. I wasn't able to reach 1GB/s read speeds running on bare metal, but I am now, with the exact same config and the same SMB optimizations in tunables. On bare metal I was stuck at around 700-800MB/s; now I see a stable 1.00 to 1.10 GB/s when reading from the ARC.