Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Next, Add "other device" -> PCI Device

Screen Shot 2017-08-21 at 6.23.27 PM.png


And select the AHCI Lynx Point device.

Screen Shot 2017-08-21 at 6.23.39 PM.png


And repeat for your PCIe NVMe SLOG device, if you have one...

Screen Shot 2017-08-21 at 5.50.24 PM.png


These two devices will be passed through to the VM, giving it bare-metal access to them.
 
This is what it should look like. Of course, your vCPU count, memory, and HD size will differ depending on your requirements.

Screen Shot 2017-08-22 at 2.41.56 PM.png
 
And now you have a VM ready to start...

Screen Shot 2017-08-21 at 6.24.47 PM.png


Click Power On... and after the GRUB screen, continue to install FreeNAS.

Screen Shot 2017-08-21 at 6.26.07 PM.png


You only want to install onto your virtual disk. Do not install onto your HDs, or onto USB drives either!

Screen Shot 2017-08-21 at 6.26.17 PM.png

Yes, you want to erase all partitions...

Screen Shot 2017-08-21 at 6.26.55 PM.png

You definitely want to Boot via BIOS here: the VM's firmware is set to BIOS, so a UEFI install won't boot in it.

And when that's done...

choose Shutdown System.

Screen Shot 2017-08-21 at 6.36.45 PM.png


And wait for the VM to shutdown...

Screen Shot 2017-08-21 at 6.37.28 PM.png
 
Preparing for First Boot after FreeNAS Install

Now, you can Edit the VM...

Screen Shot 2017-08-21 at 6.37.34 PM.png


Might as well remove the CD/DVD drive, save, and then remove the SATA controller. In theory this provides a minor optimization.

Now that we've booted the VM once, our network adapters have MAC addresses. It's very useful to make a note of these to avoid mixing up the interfaces when configuring them inside the VM.
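To capture the adapter-to-MAC mapping without squinting at the full `ifconfig` output, a small filter helps. This is just a sketch assuming FreeBSD-style `ifconfig` formatting (a stanza header line like "vmx0: flags=...", then an indented "ether" line); `list_macs` is a made-up helper name.

```shell
# list_macs: print "interface MAC" pairs from ifconfig-style output on
# stdin (FreeBSD format: "vmx0: flags=..." header, indented "ether" line).
list_macs() {
  awk '
    /^[a-z]/      { iface = $1; sub(/:/, "", iface) }  # stanza header: remember name
    $1 == "ether" { print iface, $2 }                  # MAC line: print pair
  '
}

# On the FreeNAS console:  ifconfig | list_macs
```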

Screen Shot 2017-08-22 at 2.42.41 PM.png


And I save a note in the Notes field...

Screen Shot 2017-08-22 at 2.43.25 PM.png






And we can restart.
Screen Shot 2017-08-21 at 6.46.23 PM.png


But when I go to Chrome...

Screen Shot 2017-08-21 at 6.48.10 PM.png



Chrome will sometimes refuse to connect properly to an IP that was previously hosted by a different system. In this example, my FreeNAS instance has acquired the 192.168.0.16 IP, which was in use when I first set up the ESXi instance, which means Chrome has cached the old page. I can fix it by clearing the browser cache...

Screen Shot 2017-08-21 at 6.50.17 PM.png


And now we can log in, import our config, and import our pool :)

Screen Shot 2017-08-21 at 6.55.52 PM.png
 
Configuring the vNic Network Interfaces in FreeNAS

Before adding the virtual network interfaces, I'm going to run ifconfig | less in the console shell (option 9) to display the NICs and their MAC addresses...

Screen Shot 2017-08-22 at 5.46.00 PM.png


Here you can see that vmx0 is vNic1 (i.e. Storage Network) and vmx1 is vNic2 (i.e. VM Network). If you add further vNICs, be sure to check the interfaces again, because the order can change.

I'll update the notes...
Screen Shot 2017-08-22 at 5.49.58 PM.png


So, first let's add the Storage Network interface (vmx0).

(It's probably a good idea to do the Management network interface first... then you won't have to use the console to correct it!)

Network -> Interfaces, Add Interface

Screen Shot 2017-08-22 at 6.13.37 PM.png


Select vmx0, name it Storage Network, and set the static IP and mask. I'll also add the "mtu 9000" option...

And when I save, I lose connectivity.

Pressing enter in the console, I see that the management IP has changed...
Screen Shot 2017-08-22 at 6.15.55 PM.png


So I guess I need to update the vmx1 interface via the console.
Screen Shot 2017-08-22 at 6.17.25 PM.png


And then it comes good...
Screen Shot 2017-08-22 at 6.22.38 PM.png


ESXi is using jumbo frames for storage, the vSwitch is set to support jumbo frames, and FreeNAS is now using them too. This makes a real performance improvement, increasing internal-network throughput from 2-5 Gb/s to around 20 Gb/s.
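One way to confirm jumbo frames are actually passing end-to-end is to ping with the don't-fragment flag and a payload sized so the whole packet is exactly 9000 bytes. A sketch (the IP is a placeholder for the other end of the storage network):

```shell
# A 9000-byte jumbo frame leaves this much room for ICMP payload:
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "ICMP payload for a ${MTU}-byte MTU: ${PAYLOAD}"

# On FreeBSD/FreeNAS, -D sets don't-fragment. If this ping fails while a
# plain ping succeeds, something in the path is not passing jumbo frames.
# ping -D -s "$PAYLOAD" 192.168.1.1
```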
 
Mirroring your virtual boot disk onto a USB so that you can bare-metal boot FreeNAS

This is optional... but I find it useful to be able to boot into FreeNAS without ESXi. Worst case, if I screw up my ESXi install... I can still get to all my FreeNAS files. I think that's a good safety net.

Now... lets setup bare metal boot again...

Assuming you have a thumb drive you want to bare-metal boot from: if it already has FreeNAS installed, now is a good time to use either another thumb drive or one of your two mirrors, because the first thing we need to do is erase it.

You need to erase it because, on boot, FreeNAS will load the boot pool from a USB in preference to the virtual disk, and you can't dynamically add a USB to a VM guest with PCIe pass-through enabled. So we need to shut down FreeNAS... and if the stick still has FreeNAS on it, the VM will load the boot pool off the USB. So just erase the USB.

The easiest way to do this is simply to erase the USB in a normal client PC, say a Windows box or a Mac...

Screen Shot 2017-08-21 at 6.57.24 PM.png


Then shut down your FreeNAS guest.

Next, plug in your erased USB, edit your FreeNAS VM, and add a USB Device.

Screen Shot 2017-08-21 at 7.01.08 PM.png


And select the USB drive (it's probably selected by default).

Screen Shot 2017-08-21 at 7.01.13 PM.png


Then start the FreeNAS VM again.

...when that finishes booting, log in to the GUI, then go to Boot -> Boot Status, select "freenas-boot", and click "Attach" at the bottom.

Screen Shot 2017-08-21 at 7.05.32 PM.png


And select the USB (it will probably be da1).

Screen Shot 2017-08-21 at 7.05.53 PM.png



It will begin resilvering. I like to use the shell and run zpool status freenas-boot repeatedly to watch the resilver happen...

Screen Shot 2017-08-21 at 7.08.32 PM.png
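Rather than re-running the command by hand, a small poll loop can watch for completion. A sketch that assumes the "resilver in progress" wording in `zpool status` output (which can vary between ZFS versions):

```shell
# resilver_running: succeeds (exit 0) while the given zpool status text
# reports an active resilver.
resilver_running() {
  printf '%s\n' "$1" | grep -q 'resilver in progress'
}

# On FreeNAS, poll every 5 seconds until the resilver completes:
# while resilver_running "$(zpool status freenas-boot)"; do sleep 5; done
# zpool status freenas-boot
```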



Once it's resilvered, you have a physical boot mirror which you can use to boot bare-metal.

Shut down FreeNAS, then shut down ESXi...
 
Booting FreeNAS Bare-metal.

Next, use the iKVM to power on the system, and enter the BIOS.

Select Boot Mode: Legacy, and make sure the USB Key is the first boot device.

Screen Shot 2017-08-21 at 7.16.22 PM.png


Save and Exit (F4)

And FreeNAS bare-metal should boot.

But the network interface needs to be re-added

ix0 is Port 3
ix1 is Port 4
igb0 is Port 1
igb1 is Port 2.

I want to configure ix0. DO NOT use the same IP address as you use in the ESXi VM; although it will let you, you won't be able to log in to the UI when booted inside ESXi.

Screen Shot 2017-08-22 at 7.02.18 PM.png


And you should now be able to log in to your bare-metal install.

But your boot volume will be degraded, because the virtual disk is missing.

Screen Shot 2017-08-21 at 7.25.15 PM.png


BUT, that's okay... when you reboot into ESXi it will resilver. If you wanted to, you could add a second USB mirror now...

You can shut down, re-enter the BIOS, set Boot Mode to UEFI, save/exit, and it will auto-start ESXi.

When ESXi finishes loading, start the FreeNAS VM.

You'll be greeted by a critical alert...

Screen Shot 2017-08-21 at 7.37.26 PM.png


This is just the virtual disk resilvering to catch up with the USB.

Screen Shot 2017-08-21 at 7.37.36 PM.png

zpool status freenas-boot

will confirm this. 246KB resilvered.

zpool clear freenas-boot will get rid of the warning.
 
Setting up an NFS Datastore

Firstly, let's create a vm_nfs dataset to store our NFS-based VMs.
Screen Shot 2017-08-21 at 8.47.45 PM.png

Make sure atime is set to off.

Then add an NFS share...
Screen Shot 2017-08-21 at 8.48.32 PM.png

Screen Shot 2017-08-21 at 8.49.13 PM.png


Notice that the maproot user/group is root/wheel, and Authorized Networks is our storage network.

Screen Shot 2017-08-21 at 8.49.23 PM.png

And enable the NFS service...

Next, go back to the ESXi GUI.

Datastores -> New datastore
Screen Shot 2017-08-21 at 9.04.11 PM.png


Then select "Mount NFS Datastore"
Screen Shot 2017-08-22 at 7.12.36 PM.png


Screen Shot 2017-08-21 at 9.04.54 PM.png

NFS server is your FreeNAS storage IP. NFS share is the full mount path of the dataset you shared.

Theoretically, you could use NFSv4, and it's probably more performant, but this is simpler.

Screen Shot 2017-08-21 at 9.05.00 PM.png


And now you have a mounted NFS datastore.

Screen Shot 2017-08-21 at 9.05.12 PM.png

Screen Shot 2017-08-21 at 9.05.27 PM.png
 
Setting up an iSCSI datastore in FreeNAS.

First, create a zvol. Use 64KB blocks; it's a good compromise.
Screen Shot 2017-08-21 at 9.31.57 PM.png

I like to use sparse zvols... then I can allocate storage as I need it...

And there it is...
Screen Shot 2017-08-21 at 9.32.37 PM.png


Next go to Sharing, iSCSI...

Add a Portal, Initiator, Target, Extent and Associated Targets
Screen Shot 2017-08-21 at 9.33.13 PM.png

Note: IP address is the Storage IP.
Screen Shot 2017-08-21 at 9.33.21 PM.png


Screen Shot 2017-08-21 at 9.33.28 PM.png

Screen Shot 2017-08-21 at 9.33.50 PM.png


Screen Shot 2017-08-21 at 9.34.53 PM.png

Screen Shot 2017-08-21 at 9.37.10 PM.png

Screen Shot 2017-08-21 at 9.37.19 PM.png

Screen Shot 2017-08-21 at 9.37.31 PM.png
 
Then start up the iSCSI service...
Screen Shot 2017-08-21 at 9.38.14 PM.png


Enabling iSCSI and mounting an iSCSI datastore in ESXi

Now it's time to go to VMware to enable iSCSI...

Rather than mounting an NFS datastore, with iSCSI you need to enable the iSCSI software "HBA".

Storage -> Adapters, then Configure iSCSI

Screen Shot 2017-08-21 at 9.39.14 PM.png

Screen Shot 2017-08-21 at 9.39.26 PM.png

Enable, and save.

Configure iSCSI, add a dynamic target for your FreeNAS iSCSI address...

Screen Shot 2017-08-21 at 9.45.27 PM.png


close & save...

Configure iSCSI...

Screen Shot 2017-08-21 at 9.46.08 PM.png


Hey look, it's now got a static target...

Rescan the adapters (not refresh), because it might help.

then go to the devices tab.

Screen Shot 2017-08-21 at 9.46.53 PM.png


You should see the FreeNAS iSCSI disk. It will show as degraded. This is because it's not multi-homed. Oh well. If you really wanted to, you could create another vSwitch, add it along with another port group, and all of that... or you could just not worry about it, since it's all on an internal network anyway.

If you click on the device, then you can create a datastore on it...

Screen Shot 2017-08-21 at 9.48.19 PM.png


Screen Shot 2017-08-21 at 9.48.44 PM.png



Screen Shot 2017-08-21 at 9.48.56 PM.png

Screen Shot 2017-08-21 at 9.49.02 PM.png
 
Screen Shot 2017-08-21 at 9.49.07 PM.png

Screen Shot 2017-08-21 at 9.49.13 PM.png

Screen Shot 2017-08-21 at 9.49.20 PM.png

And there we have it...

Now... that's only a 100GB zvol... let's resize it!
 

Resizing an iSCSI datastore in ESXi/FreeNAS

So, say you have a 100GB zvol for ESXi and you want to resize it... it's pretty simple.

Edit the zvol to increase its size...
Screen Shot 2017-08-22 at 2.23.24 AM.png

Here I'm growing it from 100G to 200G.

Select the device in ESXi, and click Increase Capacity.
Screen Shot 2017-08-22 at 2.24.39 AM.png

Screen Shot 2017-08-22 at 2.25.55 AM.png


Expand an existing VMFS datastore extent

Screen Shot 2017-08-22 at 2.26.05 AM.png

Select the datastore

Screen Shot 2017-08-22 at 2.26.12 AM.png


Select the device

Screen Shot 2017-08-22 at 2.26.25 AM.png


Select the partition you wish to grow...
Screen Shot 2017-08-22 at 2.26.36 AM.png


Confirm...

Screen Shot 2017-08-22 at 2.26.42 AM.png


And it's done...
 

In tomorrow's episode... we'll make some VMs ;)

EDIT: or not.
 
Adding Swap and L2ARC using Virtual Disks

There is a fast 250GB M.2 NVMe disk installed in this system, but the ESXi install and the FreeNAS boot disk only use about 12GB of it. That seems like a waste.

Although the best practice is to give FreeNAS physical access to its disks, the swap and L2ARC devices are ephemeral, so it doesn't really matter if they are virtual devices.

In my testing, the VMware pass-through virtual devices are quite performant.

So, I'll create a couple of Virtual disks in the FreeNAS VM, and use those for swap and L2ARC.

I'll use a 16GB swap disk, but you could use whatever size you prefer. I'll use a 64GB L2ARC, as I found that on a 16GB FreeNAS install, 128GB causes performance issues. The more memory your FreeNAS VM has, the more L2ARC you can provide.
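As a rough sanity check on sizing (a rule of thumb, not an official limit): every L2ARC block consumes ARC memory for its header, so L2ARC is often capped at around 4-5x RAM. With the numbers above:

```shell
# Hypothetical sizing check: cap L2ARC at roughly 4x the VM's RAM.
RAM_GB=16
MAX_L2ARC_GB=$((RAM_GB * 4))
echo "With ${RAM_GB}GB of RAM, keep L2ARC at or below ~${MAX_L2ARC_GB}GB"
```

Which lines up with the 64GB L2ARC chosen here for a 16GB VM.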

Increasing either disk's size is quite easy in the future, and since everything is virtual, you can just remove and re-add.

I will set the Shares to High. This is the priority that VMware puts on satisfying the virtual disk's I/O requests, and since this is an AIO setup, if FreeNAS is waiting on an L2ARC or swap request, then everything is.

I will use Thin provisioning for these disks: swap will be used only minimally, and thus won't take any space unless it's actually in use, and the L2ARC is similar. When the VM shuts down, I believe the disk usage is released.

Also, I will use Independent - Persistent provisioning. This means that when you back up the VM, it will not back up the L2ARC and swap disks, which saves space in your backups. Since they are ephemeral, that's fine: if you had to restore the VM from backup, it would be trivial to re-create the swap/L2ARC volumes.

Firstly, let's create the two virtual disks. If you are using PCIe pass-through, you'll need to shut down the VM to add disks to it.

Creating the virtual disks

Edit the FreeNAS VM settings, Add hard disk -> New hard disk, twice.

Screen Shot 2017-08-24 at 5.04.22 PM.png


The two new hard disks appear...

Screen Shot 2017-08-24 at 5.05.59 PM.png


As mentioned, two disks, one 64GB and one 16GB, both Thin provisioned, both Independent - persistent, and both with Shares set to High.

The disks will show their SCSI controller IDs (i.e. 0:2 and 0:3); if you want to re-arrange the device ordering in FreeNAS, you can do so by adjusting the controller order.

Next save and boot your VM.

Once booted, you can see the disks in Storage -> View Disks.

Screen Shot 2017-08-24 at 5.20.31 PM.png

It may be a good idea to use the Description field to label the swap disk; the L2ARC disk will no longer be listed here once you add it to a pool, but the swap disk will always be present.

Adding L2ARC

Adding the l2arc disk is trivial.

Storage -> Volumes, Volume Manager

Screen Shot 2017-08-24 at 5.23.41 PM.png


Then in the Volume Manager
Screen Shot 2017-08-24 at 5.23.57 PM.png

Select the volume to extend, add the L2ARC disk, and ensure you've set its type to Cache. When you're sure everything is right, hit Extend Volume.

And your L2ARC has been added. Congratulations.

You can confirm this by clicking the Volume Status icon: go to Storage -> Volumes, select the top level of your pool, and it's down at the bottom of the screen...

Screen Shot 2017-08-24 at 5.22.50 PM.png


And the volume status now shows the cache on da2p1.
Screen Shot 2017-08-24 at 5.26.15 PM.png


NOTE: when booting bare-metal, your pool might not mount without being forced, since the virtual L2ARC is missing. If you wanted to, you could offline or remove the L2ARC before rebooting.
 
Adding a non-pool swap device

The virtual swap device is a little trickier than the L2ARC, as you need to use the command line.

I documented the generalized instructions for adding any partition or disk as swap on a FreeNAS system, as well as how to temporarily disable the built-in pool-based swap devices (which are prone to crashing your server if a disk fails):

Resource: How to relocate swap to an SSD or other partition

In this case, "da3" is my swap disk. If you get an error on the gpart create, you selected the wrong disk!

Code:
root@freenas:~ # gpart create -s gpt da3
da3 created
root@freenas:~ # gpart add -i 1 -t freebsd-swap da3
da3p1 added
root@freenas:~ # glabel status | grep da3
gptid/4fab0ae9-889f-11e7-be79-000c290148d6	 N/A  da3p1


And that's the swap disk initialized and ready to use. My gptid is "4fab0ae9-889f-11e7-be79-000c290148d6".
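If you'd rather not copy the gptid by eye, the glabel output can be parsed. A sketch (`gptid_of` is a made-up helper name; it assumes glabel's Name / Status / Components columns):

```shell
# gptid_of: given glabel-status-style input on stdin (columns: Name,
# Status, Components), print the bare gptid for a partition.
gptid_of() {
  awk -v part="$1" '$3 == part { sub(/^gptid\//, "", $1); print $1 }'
}

# On FreeNAS:  glabel status | gptid_of da3p1
```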

So next I add a Post-Init command with the following:

Code:
swapoff -a ; grep -v -E 'none[[:blank:]]+swap[[:blank:]]' /etc/fstab > /etc/fstab.new && echo "/dev/gptid/4fab0ae9-889f-11e7-be79-000c290148d6.eli none swap sw 0 0" >> /etc/fstab.new && mv /etc/fstab.new /etc/fstab ; swapon -a

Note: I changed the gptid in the post-init command to match my gptid.
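The one-liner above is dense, so here is the same edit demonstrated on a throwaway copy of fstab rather than the real /etc/fstab (the first entry is a fake pre-existing swap line): it strips any current swap entries, then appends the new encrypted swap device.

```shell
# Demonstrate the fstab edit on a temporary file instead of /etc/fstab.
FSTAB=$(mktemp)
printf '%s\n' \
    '/dev/gptid/aaaa.eli none swap sw 0 0' \
    'freenas-boot/ROOT/default / zfs rw 0 0' > "$FSTAB"

GPTID=4fab0ae9-889f-11e7-be79-000c290148d6    # use your own gptid here

# 1. drop every existing swap line, 2. append the new .eli swap device,
# 3. move the edited copy into place.
grep -v -E 'none[[:blank:]]+swap[[:blank:]]' "$FSTAB" > "$FSTAB.new" &&
  echo "/dev/gptid/$GPTID.eli none swap sw 0 0" >> "$FSTAB.new" &&
  mv "$FSTAB.new" "$FSTAB"

cat "$FSTAB"
```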

Screen Shot 2017-08-24 at 5.43.26 PM.png

Screen Shot 2017-08-24 at 5.43.34 PM.png


And that will disable pool-based swap and enable the virtual-HD swap on restart.

If you don't want to restart immediately, you can just run the same command in a shell.

Screen Shot 2017-08-24 at 5.45.42 PM.png


And if you look at the system console, you'll also see the pool swap devices were disabled, and the new swap device was enabled...

Screen Shot 2017-08-24 at 5.45.33 PM.png


And finally, in top / Display System Processes, you can see the swap is available too.

Screen Shot 2017-08-24 at 5.46.54 PM.png
 
Thermal Testing

Before coming up with the final fan design and fan-control settings, I had to do some thermal testing.

I showed some of that in this post

Screen Shot 2017-08-11 at 8.21.56 PM.png


I'm running tmux with the fan controller in the primary pane, plus the solnet array tester, mprime, and CPU temperature logging. I performed these tests on bare metal so that mprime would have access to all the cores/threads for maximum utilization, and also so I'd have access to all the core temperatures.

As a result of this early testing I decided to upgrade the Exhaust fan to the Noctua 140mm.

It's good to know that my HD and CPU temps are now very reasonable at idle, with my fans spinning at their low setting, and very quiet.

Screen Shot 2017-08-24 at 6.01.31 PM.png


And flat out, the CPU doesn't tend to go above 73°C, and the HDs won't go past 40°C. And that's with 7200rpm drives.

Even flat out, it's not objectionable anymore.

A success.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Nice documentation. Thanks!
More tomorrow?

 


LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
#wow


 