Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
So you end up with them both virtualized and that isn't an issue at all?

How much work is it to make this type of setup actually work, and what spec changes would I need to make it not slow and suck?


Sent from my iPhone using Tapatalk

The biggest issue is that if you have to restart/upgrade your ESXi, you have to take down your internet firewall/gateway. Think about what that means.

BUT that does mean that you don't have to run a separate box for your gateway.

pfSense needs a gig or so of RAM, at least 1 dedicated ethernet port (for WAN), preferably two, and a vCPU (or 2) depending on how much throughput you want. Of course, it only uses vCPU when you're actually using it... same with the RAM, it can swap with the ESXi host cache and page to your boot M.2.

As I mentioned, my board has 2 gigabit ports, so one can be WAN and one LAN. The LAN one can connect to my switch, and internally ESXi can connect any VMs I want directly to the LAN port group, so that internet-bound traffic doesn't even need to go to the switch.

Some say that doing this is less secure than a separate box. I'm sorry, but if someone knows how to break out of a VM guest into the hypervisor, then they have bigger fish to fry than me...

The most important thing though is that there are two gigabit ports and one of them has the IPMI fallback set on it by default (the first). Do not put your WAN on that port! It would mean exposing IPMI to the interwebz.

Explicit pfSense/ESXi setup instructions coming soon ;)
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
The biggest issue is that if you have to restart/upgrade your ESXi, you have to take down your internet firewall/gateway. Think about what that means.

BUT that does mean that you don't have to run a separate box for your gateway.

pfSense needs a gig or so of RAM, at least 1 dedicated ethernet port (for WAN), preferably two, and a vCPU (or 2) depending on how much throughput you want. Of course, it only uses vCPU when you're actually using it... same with the RAM, it can swap with the ESXi host cache and page to your boot M.2.

As I mentioned, my board has 2 gigabit ports, so one can be WAN and one LAN. The LAN one can connect to my switch, and internally ESXi can connect any VMs I want directly to the LAN port group, so that internet-bound traffic doesn't even need to go to the switch.

Some say that doing this is less secure than a separate box. I'm sorry, but if someone knows how to break out of a VM guest into the hypervisor, then they have bigger fish to fry than me...

The most important thing though is that there are two gigabit ports and one of them has the IPMI fallback set on it by default (the first). Do not put your WAN on that port! It would mean exposing IPMI to the interwebz.

Explicit pfSense/ESXi setup instructions coming soon ;)

HMMMMMMMMMMMMMM, this would prompt an upgrade to a Xeon if nothing else, but that is not a huge deal.

I also currently run 8 gigs of RAM on my pfSense box, and use ~70% of it. I have A LOT going on in terms of pfBlockerNG. That being said, I am sure I can cut that down... Could also harvest the 850 Evo I have in my pfSense box for this project; I just needed to use something, and I had it laying around collecting dust. I will have to think about this and wait for your detailed instructions. Half of the point of all of this is just the fun of it.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
HMMMMMMMMMMMMMM, this would prompt an upgrade to a Xeon if nothing else, but that is not a huge deal.

I also currently run 8 gigs of RAM on my pfSense box, and use ~70% of it. I have A LOT going on in terms of pfBlockerNG. That being said, I am sure I can cut that down... Could also harvest the 850 Evo I have in my pfSense box for this project; I just needed to use something, and I had it laying around collecting dust. I will have to think about this and wait for your detailed instructions. Half of the point of all of this is just the fun of it.

Right. The Xeon D supports 128GB of RAM, so if you need to use 8GB of that for your gateway, so be it :)

This box is going into my home to be the central server. There are no other 24/7 servers/PCs, just laptops and various connected devices, STBs, etc., so it will do everything, including home automation.

And I intend it to have a 5+ year life span. Xeon Ds are rated for 7+ years.

And regarding cores, my Xeon D has 8 cores (and 16 threads). The free version of ESXi has a per-VM limit of 8 vCPUs. I use 8 vCPUs for FreeNAS, and when slamming it at 1GB/s it's hitting 60-70% utilization.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Which reminds me, it's a good idea to start your build by updating the IPMI (which can be done through the web interface). And the BIOS.

The BIOS can be upgraded via an ISO through IPMI virtual media I believe... but I may be wrong on that... I forget. If that doesn't work, then just burn the ISO to a USB and physically connect it.

https://tinkertry.com/supermicro-superserver-bios-12a-and-ipmi-358-released-summer-2017

While doing the IPMI update, when it says wait and power off... just wait... it will eventually do its thing and come back... and if you power off too soon, you'll need to do a DOS-based recovery... which is unpleasant. Guess how I know.

You also need to make sure that your PCIe NVMe SSDs have the latest firmware. Earlier versions of some Samsung firmware have incompatibilities with these motherboards.

https://tinkertry.com/supermicro-su...n-cause-960-pro-and-evo-to-hide-heres-the-fix

With the Samsung M.2 drives, the easiest way to do this is to pop them into a Windows machine if you have one, and run the latest version of Samsung Magician. Alternatively, you can burn the ISO Samsung provides to a USB and boot into their Linux-based text-mode firmware updater. It's easier to use Samsung Magician :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
About the LAN ports...

Screen Shot 2017-08-21 at 1.57.43 PM.png


A is the IPMI port
E is LAN Port 1 (gigabit)
D is LAN Port 2 (gigabit)
G is LAN Port 3 (10gbe)
F is LAN Port 4 (10gbe)

By default (this can be disabled), the IPMI port will fall back to using LAN Port 1 if the IPMI port is not connected. This means that, at a minimum, you can connect a single cable to LAN Port 1 and get both IPMI access to the BMC and LAN access to your ESXi/FreeNAS/etc. installations.

ESXi (6.5U1) does not include drivers out of the box for the Intel X557 chipset, i.e. the 10gbe ports. Thus, in order to set up and install ESXi, we will need a LAN cable connected to Port 1.

Once we have ESXi up and running, we can change that to Port 3 and then use Port 4 for SAN access. Ideally you want your SAN traffic segregated from the rest of your LAN, because LAN traffic will generally trigger SAN traffic (or the other way round), and the last thing you want is for one to collide with the other. It also helps to keep broadcasts off the SAN.

Now, if you decide to set up pfSense as an internet gateway, you probably won't have more than a gigabit uplink to your internet connection, so it makes sense to use the gigabit ports for pfSense. So one would be LAN and the other WAN.

Do NOT use Port 1 for the WAN! The reason is that IPMI could fail over to Port 1 and be exposed to the internet! Instead, use Port 2 for your WAN and Port 1 for your LAN. If you want to keep the IPMI port off your LAN entirely (say you have a management network), you can disable the failover in the BIOS and connect the IPMI port to that management network.

If you do have IPMI running through Port 1, I found that you lose access to it during the early stages of a reboot. Not a huge problem, but there ya go.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
In today's episode...

Setting up ESXi with FreeNAS loop-back storage...

...and how to pass through the Lynx Point AHCI controller to FreeNAS

Enjoy ;)

About PCIe Passthrough, and its importance to virtualized FreeNAS

http://www.freenas.org/blog/yes-you-can-virtualize-freenas/

You need to use PCIe pass-through to pass through an HD controller in order to run FreeNAS safely in ESXi. I'm going to pass through the built-in SATA/AHCI controller. Because of this, I need another device to boot ESXi and to host the FreeNAS VM configuration. And I might as well use that device for the virtual disk image too.

An M.2 SATA drive would need the SATA controller, so I needed an M.2 PCIe NVMe drive.

You could use a smaller drive, perhaps an Intel Optane 32GB.

If repeating this, you should ensure your SSD, motherboard and BMC/IPMI firmware are all up to date, as there are known issues with earlier versions.

How to download the Free ESXi 6.5U1 ISO (aka vSphere Hypervisor 6.5 Update 1)
... and obtain a license.

You need to download the vSphere Hypervisor ISO. I'd suggest starting with the 6.5 Update 1 ISO rather than the more easily available 6.5a, as this will save you having to do an update later.

ESXi is an enterprise-grade hypervisor for PC hardware. It allows you to efficiently and reliably run multiple "guest" OSes on a single "host". More advanced features (clustering, vMotion and other datacentre features) are available for a price, but for a single host the free version is not only free, but super reliable.

VMware offers a trial of 6.5U1, which includes the full download, and a free license, which includes the full download of 6.5a, but they don't directly offer the 6.5U1 ISO with a license key. The 6.5U1 ISO *is* the free version though. So register for a free 6.5a license key, and then register for the 6.5U1 trial to get the full ISO installer.

The free version of 6.5a with license registration:
https://www.vmware.com/au/products/vsphere-hypervisor.html

Screen Shot 2017-08-21 at 10.52.05 PM.jpg


And then register for a 6.5U1 trial to download the full 6.5U1 installer...
https://my.vmware.com/en/web/vmware/evalcenter?p=vsphere-6

Screen Shot 2017-08-21 at 2.35.34 PM.png
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Installing ESXi via Supermicro IPMI

Next, spin up the Java-enabled web browser that you use for IPMI virtual media mounting. I use IE11 in Windows 10 in a VM on my Mac OS X box. You can't (currently) mount virtual media through the HTML5 iKVM, unfortunately.

Then log in to your IPMI console. If you don't know the IP address of your IPMI, you can see it on your BIOS boot screen. Or you can perhaps find it on your router's DHCP list, or you can use the IPMIView app to scan your network segment.

Remote Control -> Console Redirection, then Launch Console
Screen Shot 2017-08-21 at 2.44.07 PM.png


And then click through the 8 or so security warnings. Best not to update Java... it's liable to lock you out.

Screen Shot 2017-08-21 at 2.46.57 PM.png

The Java iKVM in all its glory.

Choose Virtual Media -> Virtual Storage.

Screen Shot 2017-08-21 at 2.47.29 PM.png


Select ISO as the Logical Drive Type, then Open Image, and select the VMware 6.5U1 ISO.

Screen Shot 2017-08-21 at 2.49.49 PM.png


And plug it in...

Screen Shot 2017-08-21 at 2.50.22 PM.png

Yeah, "Plug-in OK!!"

And now we can power on the system...
Screen Shot 2017-08-21 at 2.52.43 PM.png
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Screen Shot 2017-08-21 at 2.53.50 PM.png


When you see the prompt, hit DEL a few times to enter Setup...

(One of the outstanding issues with the X10SDV HTML5 iKVM is that the virtual keyboard sometimes stops working. You might find that it won't detect your DEL to get into the BIOS, or it will, but then nothing responds once you are in the BIOS... If this happens to you, just reset the BMC (Maintenance -> Unit Reset). And if you reset the BMC... you'll have to re-attach the virtual media.)

Screen Shot 2017-08-21 at 3.13.05 PM.png

Once you get to the BIOS, make sure it is set to UEFI Boot Mode... and that UEFI USB CD/DVD is the first boot device.

Screen Shot 2017-08-21 at 2.58.50 PM.png


And save & exit (F4)

Screen Shot 2017-08-21 at 3.14.59 PM.png


ESXi Installer starts...

Screen Shot 2017-08-21 at 3.17.57 PM.png

Note: Build 5969303 is 6.5U1.

Screen Shot 2017-08-21 at 3.18.50 PM.png


Screen Shot 2017-08-21 at 3.19.56 PM.png


You want to select the SSD you'll be installing ESXi to. You DO NOT want to select one of your FreeNAS pool HDs.

Screen Shot 2017-08-21 at 3.20.38 PM.png

The VMware installer will erase the entire contents of whatever drive you tell it to install to. You don't want to use a USB drive: you can install ESXi to one, but you can't have a datastore on it, and we need a datastore.

(I'm not actually using a 1TB 960 Pro as my boot disk... it's just for this tutorial; my actual boot disk is a 250GB 960 Evo.)
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Screen Shot 2017-08-21 at 3.20.46 PM.png


Screen Shot 2017-08-21 at 3.23.45 PM.png


Yeay. Time to unplug the virtual media.

Screen Shot 2017-08-21 at 3.24.37 PM.png


Screen Shot 2017-08-21 at 3.24.45 PM.png


Plug-Out OK!! Stop!!

And then continue the ESXi reboot process...

Screen Shot 2017-08-21 at 3.25.29 PM.png


We'll set up the BIOS so when you boot via UEFI, it'll boot ESXi, and when you boot Legacy, it'll boot FreeNAS bare metal.

Re-enter the BIOS, change the #1 UEFI boot device to the HD... i.e. your M.2 boot drive... and save changes and reset.

Screen Shot 2017-08-21 at 3.26.36 PM.png


ESXi should now boot...

Screen Shot 2017-08-21 at 3.29.39 PM.png


So, since ESXi doesn't support the X552/X557 10gbe ethernet ports out of the box, you may not have remembered to connect your network to gigabit Port 1... and if you did forget, you'll get this...
Screen Shot 2017-08-21 at 3.30.22 PM.png


...so to fix the "http://0.0.0.0/", just plug your network into Port 1 and restart...

And this time your hypervisor will probably pick up a DHCP address. We'll change it to a static IP when we set up our 10gbe ports.

Screen Shot 2017-08-21 at 3.35.12 PM.png
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Logging in to the ESXi GUI

Log in to ESXi's web GUI at the management address shown on the console screen; Chrome works best.
Screen Shot 2017-08-21 at 3.36.45 PM.png
Screen Shot 2017-08-21 at 3.37.15 PM.png


You'll see that your "datastore1" has been created...

Screen Shot 2017-08-21 at 3.38.42 PM.png


Assigning your license to ESXi


Let's get rid of the trial... use the license key from the 6.5a download page...

Manage->Licensing, Assign License
Screen Shot 2017-08-21 at 3.39.55 PM.png



Screen Shot 2017-08-21 at 4.02.37 PM.jpg

Enter your license, check it, and assign it.

Enable SSH and the ESXi shell

The ESXi Shell allows you to use the esxcli tool to access and manage almost all hypervisor state via SSH. It will be necessary later for scripting purposes.

Click on Manage -> Services

Screen Shot 2017-08-21 at 4.05.16 PM.png

Start the ESXi Shell and SSH

Screen Shot 2017-08-21 at 4.05.32 PM.png



There they are...

Screen Shot 2017-08-21 at 4.05.43 PM.png


And set their Policy to start and stop with the host.

Screen Shot 2017-08-21 at 4.05.55 PM.png


You can now SSH in to your hypervisor as root.
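As an aside, once you do have shell access, the same two services can also be toggled from the command line with vim-cmd, which is handy if you ever want to script it. A hedged sketch (these sub-commands are from memory, so double-check them on your host before relying on them):

Code:
# Enable and start the ESXi Shell and SSH from the ESXi shell/console
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh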
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Enable Host Swap

We'll enable host swap on your speedy M.2 NVMe boot disk/datastore. Host swap is used not only for the ESXi host to swap if it needs to, but also for caching and for VMs without reserved memory to balloon, which allows you to run more VMs with less memory.

Manage -> System, Swap
Screen Shot 2017-08-21 at 4.08.09 PM.png


Just set the Datastore to "datastore1", and enable all options.
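If you'd rather do this over SSH, the same settings live under esxcli's sched swap namespace. The flag names below are my best recollection rather than gospel, so run the get first and match the set options against what it prints:

Code:
# Show the current host swap settings
esxcli sched swap system get

# Point host swap at datastore1 and enable the host cache / local swap options
# (flag names assumed -- verify against the get output before running)
esxcli sched swap system set --datastore-enabled true --datastore-name datastore1
esxcli sched swap system set --hostcache-enabled true --hostlocalswap-enabled true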

Setting up NTP time service

NTP is used to ensure that the ESXi host has an accurate clock. This is important when you are joining an Active Directory domain.

Find your local NTP pool. http://www.pool.ntp.org/en/

I'm in Australia, so I'm using the AU pool.
http://www.pool.ntp.org/zone/au

So I'll add the following servers:
Code:
0.au.pool.ntp.org
1.au.pool.ntp.org
2.au.pool.ntp.org
3.au.pool.ntp.org


Manage -> System, Time & Date, Edit settings.
Screen Shot 2017-08-21 at 4.13.08 PM.png

Screen Shot 2017-08-21 at 4.13.13 PM.png


Start the NTP service
Screen Shot 2017-08-21 at 4.13.24 PM.png


And set its Policy to start/stop with the host.
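Behind the scenes, those GUI settings end up as ordinary server lines in /etc/ntp.conf on the host. If you ever want to sanity-check it over SSH, the file should look roughly like this (a sketch only; the restrict/driftfile boilerplate ESXi writes may differ slightly):

Code:
# cat /etc/ntp.conf
restrict default nomodify notrap nopeer noquery
driftfile /etc/ntp.drift
server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org
server 3.au.pool.ntp.org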
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Installing the Intel X552/X557 10Gbe Ethernet Drivers for VMware 6.5U1

Networking -> Physical NICs
Screen Shot 2017-08-21 at 4.21.22 PM.png

If you look at the Physical NICs in Networking, you'll see that vmnic2 and vmnic3 don't exist.

The best way I know to find the right drivers is to search for them on the HCL:
https://www.vmware.com/resources/compatibility/search.php

"ixgbe", and this is the one you want...

Screen Shot 2017-08-21 at 4.29.43 PM.png


Which leads to this page...

Screen Shot 2017-08-21 at 4.30.09 PM.png


And if you read all the release notes, you'll find out that the right version is 4.5.1, and not 4.5.2. 4.5.2 is specific to Fibre Channel.

And assuming 4.5.1 is still the right version, the download page for 4.5.1 is here:
https://my.vmware.com/group/vmware/details?downloadGroup=DT-ESXI60-INTEL-IXGBE-451&productId=491

Then just download the driver, and unzip the archive.

Inside the archive, there will be a "doc" directory with installation instructions. Which you can skip ;)

We want to upload the VIB, "net-ixgbe_4.5.1-1OEM.600.0.0.2494585.vib" to our datastore.

Go to Storage, then click "Datastore Browser" to browse datastores.
Screen Shot 2017-08-21 at 4.32.42 PM.png


Screen Shot 2017-08-21 at 4.32.52 PM.png


I like to create a "downloads" directory on the root of my primary datastore... use the Create Directory button... then click Upload...

Screen Shot 2017-08-21 at 4.33.06 PM.png


And select the VIB.

Screen Shot 2017-08-21 at 4.33.28 PM.png


It'll start uploading immediately. When it finishes, which should be quite quick since it's so small, you should see...

Screen Shot 2017-08-21 at 4.33.43 PM.png
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
SSH in to the hypervisor as root, then cd/ls around a bit to get to our datastore...

Screen Shot 2017-08-21 at 4.37.30 PM.png


The datastores are in /vmfs/volumes, in this case cd /vmfs/volumes/datastore1

And then the command to install the VIB is:
Code:
esxcli software vib install -v /vmfs/volumes/datastore1/downloads/net-ixgbe_4.5.1-1OEM.600.0.0.2494585.vib

You have to use a full path... you can use tab completion... I like to cd/ls around to see where I am... and then get the full path with pwd

And then the driver installs. A reboot is required.

Screen Shot 2017-08-21 at 4.40.48 PM.png

reboot

(if you want to save a reboot, you could perform the AHCI hack below first before rebooting...)

But let's reboot and see if our 10gbe ports wake up...

I like to watch the hypervisor shutdown/boot via IPMI... then I know when it's back...

Screen Shot 2017-08-21 at 4.42.55 PM.png


And when it's back, we go to Networking -> Physical NICs.

Screen Shot 2017-08-21 at 4.44.59 PM.png

And our 10gbe NICs are there. Great.
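If you'd rather confirm it from the shell, a couple of esxcli commands will show the installed driver package and the NICs it picked up:

Code:
# Confirm the ixgbe VIB is installed
esxcli software vib list | grep ixgbe

# List physical NICs -- vmnic2/vmnic3 should now show up with the ixgbe driver
esxcli network nic list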

Changing the management network to use the 10gbe LAN port

I'm going to connect up LAN port 3/4 now... and disconnect Port 1. IPMI only ever functions through the IPMI port, or if failover is enabled, Port 1, so I already have a network cable directly connected to the IPMI port.

Of course, when you disconnect Port 1 you'll lose connectivity to your ESXi host... so, back to the IPMI iKVM console again.

Screen Shot 2017-08-21 at 4.46.51 PM.png

Hit F2 to customize the network... and log in as root.

Screen Shot 2017-08-21 at 4.47.22 PM.png

Then Configure Management Network...

And select the right network adapter.
Screen Shot 2017-08-21 at 4.47.53 PM.png

i350 is gigabit, X557 is 10gbe.

Edit the IPv4 Configuration
Screen Shot 2017-08-21 at 4.48.15 PM.png


Screen Shot 2017-08-21 at 4.48.40 PM.png


I'm going to use a static IP. Because we used DHCP initially, the gateway, netmask and DNS information are already correct.
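As an aside, if the iKVM is being uncooperative, the same change can be made over SSH with esxcli instead of the console. A sketch, assuming the management VMkernel NIC is vmk0 and using example addresses (expect your SSH session to drop when the IP changes):

Code:
# Set a static IP on the management VMkernel NIC (vmk0 assumed; addresses are examples)
esxcli network ip interface ipv4 set -i vmk0 -I 192.168.1.10 -N 255.255.255.0 -t static

# Set the default gateway if it isn't already correct from DHCP
esxcli network ip route ipv4 add -n default -g 192.168.1.1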
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Exit out, and restart the management network.
Screen Shot 2017-08-21 at 4.49.30 PM.png


Screen Shot 2017-08-21 at 4.49.50 PM.png


And your management network should be running off your 10gbe ports now...

Screen Shot 2017-08-21 at 4.50.26 PM.png


This is actually the last time you'll ever need to use the ESXi console.

Let's log back in to the ESXi GUI at the above IP address...
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
ESXi Virtual Networking

By default you have the following port groups...

Screen Shot 2017-08-21 at 4.53.32 PM.png


And they're both connected to vSwitch0, which is connected to vmnic2.

Screen Shot 2017-08-21 at 4.56.29 PM.png


In ESXi you have a number of virtual switches... which have a set of virtual port groups. You connect VMs to port groups via their virtual NICs (vNICs)... you connect port groups to virtual switches... and then those virtual switches uplink to real switches through vmnics, which represent physical NICs (pNICs).

And the VMkernel, which ESXi runs its services over, also needs to connect to vSwitches. The ESXi host communicates through its VMkernel NICs, so when you connect to the management interface, or when ESXi mounts an NFS datastore or accesses iSCSI, that is performed through a VMkernel NIC; they are effectively the host's own network interfaces.

BTW, you can rename vSwitches, but it involves config file hacking...
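If you want to see how those pieces map onto the default config, a quick read-only look from the shell shows the same thing the GUI does:

Code:
# List standard vSwitches with their uplinks and port groups
esxcli network vswitch standard list

# List port groups and which vSwitch each belongs to
esxcli network vswitch standard portgroup list

# List VMkernel NICs (vmk0 is the management interface)
esxcli network ip interface list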

Setting up a Storage Network

The Storage Network will be used for loopback NFS and iSCSI storage to store VMs on. Alternatively, you could just store VMs on the SSD datastore.

It's easier to set up the Storage Network before setting up any VMs.

We're going to set up a new vSwitch; we're not going to add an uplink (although we could use LAN Port 4 if we wanted to).

We're also going to add a "Storage Network" port group, which will give to FreeNAS to serve iSCSI and NFS on, and we'll also create a "Storage Kernel" port group for ESXi to be able to access those NFS and iSCSI shares.

Networking -> Virtual Switches, Add standard virtual switch
Screen Shot 2017-08-21 at 7.43.22 PM.png

Screen Shot 2017-08-21 at 7.43.55 PM.png

With MTU 9000.
Erase the uplink.

Now, Add a port group, Networking -> Port groups, Add port group
Screen Shot 2017-08-21 at 7.44.06 PM.png

Screen Shot 2017-08-21 at 7.44.31 PM.png

Set the Port group to use vSwitch1, and call it "Storage Network"

Add another port group for "Storage Kernel"
Screen Shot 2017-08-21 at 7.44.58 PM.png

Again, vSwitch1

Next, add the VMkernel NIC for ESXi to use for iSCSI and NFS: Networking -> VMkernel NICs, Add VMkernel NIC
Screen Shot 2017-08-21 at 7.45.21 PM.png

Screen Shot 2017-08-21 at 7.54.08 PM.png

Set port group to Storage Kernel
MTU 9000

Set the IP address. I'll use 10.55.0.xx addresses for ESXi hosts and 10.55.1.xx for ZFS hosts, on a 255.255.0.0 network.

Note, there is no point enabling any services on this VMkernel NIC, as the free version of ESXi doesn't support any of the vCenter features. All we need to do is create a VMkernel NIC for each storage port group; this is the equivalent of adding a port group to the ESXi host itself, rather than to a VM.

This VMkernel NIC will be used by ESXi to connect to iSCSI and NFS when we target the 10.55.xx.xx subnet.
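For reference, everything in this post can also be done from the ESXi shell. This is a sketch of what I believe are the equivalent esxcli calls, using my names, MTUs and an example 10.55.x.x address; sanity-check the options on your own host before trusting it:

Code:
# New vSwitch with jumbo frames and no uplink
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Port groups for the FreeNAS VM side and the ESXi VMkernel side
esxcli network vswitch standard portgroup add -v vSwitch1 -p "Storage Network"
esxcli network vswitch standard portgroup add -v vSwitch1 -p "Storage Kernel"

# VMkernel NIC on the Storage Kernel port group, MTU 9000, static IP (example address)
esxcli network ip interface add -i vmk1 -p "Storage Kernel" -m 9000
esxcli network ip interface ipv4 set -i vmk1 -I 10.55.0.10 -N 255.255.0.0 -t static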
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Set up ESXi to pass through the Lynx Point AHCI controller to FreeNAS

We need to set up AHCI pass-through so we can install FreeNAS and access the pool HDs with no virtualization in the way.

By default, VMware won't let you pass-through the AHCI controller...

First, let's take a look at the PCIe devices in the system.

Host -> Manage -> Hardware
Screen Shot 2017-08-21 at 4.59.22 PM.png


Greyed-out PCI devices aren't available for PCIe pass-through. Other devices are available for pass-through but currently disabled, like my Intel P3700 PCIe NVMe.

You can click on a normal pass-through-capable device, then enable pass-through. You then need to reboot so that ESXi will not take ownership of the device at boot time, and then you can pass it through to any VM, as if the VM had bare-metal access to the device. This is the magic of VT-d (Intel Virtualization Technology for Directed I/O).

I'll toggle pass-through on my P3700 because my bare-metal FreeNAS install uses it for swap/SLOG and L2ARC... more on that later...

And a notice appears:

Screen Shot 2017-08-21 at 4.59.35 PM.png


And the device's pass-through status changes to:

Screen Shot 2017-08-21 at 4.59.41 PM.png

Enabled / Needs reboot.

But if we search for AHCI, we'll find the Lynx Point AHCI controller, and see that it's not capable of pass-through...

Screen Shot 2017-08-21 at 5.01.58 PM.png


Well, let's fix that :)

ssh into your hypervisor, and
cd /etc/vmware
ls

Screen Shot 2017-08-21 at 5.04.09 PM.png


I'd suggest making a backup of your PCIe pass-through map, as you need to edit it with vi, and it's very easy to mess things up with vi.

cp passthru.map passthru.map-bak

then edit it in vi
vi passthru.map

Screen Shot 2017-08-21 at 5.05.14 PM.png

Screen Shot 2017-08-21 at 5.07.33 PM.png


vi tutorial:
https://www.howtogeek.com/102468/a-beginners-guide-to-editing-text-files-with-vi/

We need to add the PCIe IDs for the Lynx Point AHCI controller. I've already found them, and they're the same for all X10SDV boards AFAIK... but if you wanted to find them yourself, you could spelunk through lspci.
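If you'd rather verify the IDs on your own board before trusting mine, something along these lines from the ESXi shell should do it. The grep patterns assume the controller's device name actually contains "AHCI", and the exact output format varies a bit between ESXi versions, so treat this as a rough sketch:

Code:
# Dump PCI devices and look for the AHCI controller's Vendor ID / Device ID fields
esxcli hardware pci list | grep -i -B 2 -A 10 ahci

# lspci gives a terser one-line-per-device view if you prefer
lspci | grep -i ahci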

So, you need to add the following

Code:
# CUSTOMS
# Intel Lynx Point AHCI
8086 8c02 d3d0 false

# /CUSTOMS

to the pass-through map

Screen Shot 2017-08-21 at 5.11.46 PM.png


I've experimented with passing through the XHCI controller... that works and provides perfect USB3 performance to the VM, but there is only one USB3 controller on this board, and the USB2 hubs are ganged off it, which means you can't use any other USB devices with any other VMs, which is far too limiting.

Once you've made that change, it's reboot time.

reboot

Or in the GUI...

Screen Shot 2017-08-21 at 5.13.04 PM.png
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Now, when we go to Manage hardware, firstly we can see our PCIe NVMe is Active for pass-through (if you did that)

Screen Shot 2017-08-21 at 5.22.54 PM.png


And secondly, we can see that the AHCI controller is now available for pass-through,

Screen Shot 2017-08-21 at 5.23.07 PM.png


so let's enable it...

Screen Shot 2017-08-21 at 5.23.39 PM.png


and reboot...

Screen Shot 2017-08-21 at 5.23.48 PM.png


And when we come back...

Screen Shot 2017-08-21 at 5.32.09 PM.png


And there we have it...

(If you wanted to, you could've done the PCIe pass-through hacking when we first used SSH for installing the 10gbe drivers... and then we could've enabled both the PCIe NVMe and the AHCI controller at the same time.)

Now we can give FreeNAS direct and exclusive access to all SATA drives connected to the X10SDV motherboard, as well as your NVMe SLOG device (if you have one), which is what you need to do to successfully virtualize FreeNAS.

In my fairly heavy testing this works flawlessly. Passing through a SATA controller might not work on all motherboards, but it does seem to work well on the Xeon D X10SDV boards.
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
So, now we're ready to install FreeNAS!!!

This install process will require a config backup, if you have a config you want to preserve.

Firstly, download a FreeNAS install ISO... You can download directly via HTTP to ESXi if you disable the outbound firewall... but it's VERY slow... so it's much faster to download to your local machine and then upload it to your datastore.

Alternatively, if you have another FreeNAS box... you could mount an NFS directory. But I think it's a good idea to keep the FreeNAS install ISO on your datastore... that way, if something does go wrong with your FreeNAS install, you can always install it again...
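For what it's worth, if you did want to pull the ISO straight down to the datastore over SSH, the usual recipe looks roughly like this. The ruleset name and wget usage are to the best of my knowledge; paste the real ISO URL from the download page below:

Code:
# Temporarily allow outbound HTTP/HTTPS from the host
esxcli network firewall ruleset set -e true -r httpClient

# Pull the ISO straight into the datastore (paste the URL from the FreeNAS download page)
cd /vmfs/volumes/datastore1/downloads
wget "<FreeNAS ISO URL from the download page>"

# Close the firewall back up afterwards
esxcli network firewall ruleset set -e false -r httpClient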

Anyway, go to your favourite FreeNAS download page...

https://download.freenas.org/11/latest/x64/

And download the latest full iso...

Once the ISO has downloaded, use the datastore browser in the ESXi GUI to navigate to your downloads directory, and upload the ISO.

You can close the datastore browser immediately, and wait for the task to complete...

Screen Shot 2017-08-21 at 5.38.36 PM.png


Once it has, we'll make a FreeNAS VM.

Click on Virtual Machines...

Screen Shot 2017-08-21 at 5.39.19 PM.png


Then Create / Register VM...

Screen Shot 2017-08-21 at 5.39.30 PM.png


Create a new virtual machine

Screen Shot 2017-08-21 at 5.39.58 PM.png


You need to select OS = Other and FreeBSD 64bit

Screen Shot 2017-08-21 at 5.40.19 PM.png


Select a local datastore

Then we can do the initial config of the VM. You should select at least 2 vCPUs, but I'm going to use 8, because I want FreeNAS to be performant, and that is the maximum allowed in free ESXi.

Screen Shot 2017-08-21 at 5.42.20 PM.png


Then set the memory to at least 8GB; I'm using 24GB.

You must set "Reserve all guest memory", because that's required for PCIe Pass-through.

Next, let's create a virtual boot HD.

Screen Shot 2017-08-21 at 5.43.06 PM.png


I'm using 14GB because I have 14.3GB USB drives (they're actually 16GB on the packet...), and I want to be able to boot bare-metal FreeNAS... which means I need to be able to add the USB drives to this boot disk as mirrors... and that means the USB drives need to be at least as large as the virtual disk... so I'm using 14GB.

Thin provisioned just means it will use minimal space and will zero on demand. Since it's a boot disk and rarely sees any write load, it doesn't matter.

BTW, you will want a config backup. The USB drives will get erased.

The virtual disk is shared to FreeNAS via SCSI Controller 0, which is an LSI Logic Parallel SCSI card. This is optimal and the default.

Screen Shot 2017-08-22 at 2.41.22 PM.png

Next, switch the USB Controller to USB 3.0. VMware will warn that it's not supported, but it is supported in FreeNAS 11.

Then set the Network Adapter to VMXNET 3, on the "VM Network". Then add a second Network Adapter, and set that to "Storage Network", also VMXNET 3.

And then set CD/DVD to Datastore ISO file, and select your FreeNAS install ISO.
 
Last edited: