Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Installing macOS Sierra in ESXi 6.5U1

Firstly, to install Mac OS X, you need to build an installer ISO, and to do that you will need an instance of Mac OS X.

This is done on a Mac.

Building a Sierra ISO
You will need a copy of "Install macOS Sierra.app". If you don't already have it, you can download it from the App Store by searching for "macOS Sierra".

Apple's instructions are here:
https://support.apple.com/en-au/HT201475

Once you have downloaded it, you need to create an ISO image from the installer.

A script to do this can be found here:
https://gist.github.com/julianxhokaxhiu/6ed6853f3223d0dd5fdffc4799b3a877

I've reproduced the script below, with the addition of an rm /tmp/Sierra.cdr.dmg to clean up the temporary image:
Code:
#!/bin/bash
#
# Credits to fuckbecauseican5 from https://www.reddit.com/r/hackintosh/comments/4s561a/macos_sierra_16a...
# Adapted to work with the official image available into Mac App Store
#
# Enjoy!

hdiutil attach /Applications/Install\ macOS\ Sierra.app/Contents/SharedSupport/InstallESD.dmg -noverify -nobrowse -mountpoint /Volumes/install_app
hdiutil create -o /tmp/Sierra.cdr -size 7316m -layout SPUD -fs HFS+J
hdiutil attach /tmp/Sierra.cdr.dmg -noverify -nobrowse -mountpoint /Volumes/install_build
asr restore -source /Volumes/install_app/BaseSystem.dmg -target /Volumes/install_build -noprompt -noverify -erase
rm /Volumes/OS\ X\ Base\ System/System/Installation/Packages
cp -rp /Volumes/install_app/Packages /Volumes/OS\ X\ Base\ System/System/Installation/
cp -rp /Volumes/install_app/BaseSystem.chunklist /Volumes/OS\ X\ Base\ System/BaseSystem.chunklist
cp -rp /Volumes/install_app/BaseSystem.dmg /Volumes/OS\ X\ Base\ System/BaseSystem.dmg
hdiutil detach /Volumes/install_app
hdiutil detach /Volumes/OS\ X\ Base\ System/
hdiutil convert /tmp/Sierra.cdr.dmg -format UDTO -o /tmp/Sierra.iso
rm /tmp/Sierra.cdr.dmg
mv /tmp/Sierra.iso.cdr ~/Desktop/Sierra.iso


Assuming you have "Install macOS Sierra.app" in your /Applications directory, the above bash script can be used to create the ISO. Just paste this into a .sh script, then execute it in a shell.
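For example, assuming you saved the script as create-installer-iso.sh on your Desktop:
Code:
cd ~/Desktop
sh ./create-installer-iso.sh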

"Sierra.iso" will appear in a few moments on your desktop.

The output in your shell will look something like this:
Code:
stux$ sh ./create-installer-iso.sh
/dev/disk2			 GUID_partition_scheme			
/dev/disk2s1		   EFI							
/dev/disk2s2		   Apple_HFS						 /Volumes/install_app
.......................................................................................................
created: /tmp/Sierra.cdr.dmg
/dev/disk3			 Apple_partition_scheme			
/dev/disk3s1		   Apple_partition_map			
/dev/disk3s2		   Apple_HFS						 /Volumes/install_build
	Validating target...done
	Validating source...done
	Retrieving scan information...done
	Validating sizes...done
	Restoring  ....10....20....30....40....50....60....70....80....90....100
	Remounting target volume...done
"disk2" unmounted.
"disk2" ejected.
"disk3" unmounted.
"disk3" ejected.
Reading Driver Descriptor Map (DDM : 0)…
Reading Apple (Apple_partition_map : 1)…
Reading disk image (Apple_HFS : 2)…
.......................................................................................................
Elapsed Time: 34.573s
Speed: 211.6Mbytes/sec
Savings: 0.0%
created: /tmp/Sierra.iso.cdr


and if you run
ls -l ~/Desktop/Sierra.iso

you should see something like this:
Code:
stux$ ls -l ~/Desktop/Sierra.iso
-rw-r--r--  1 stux  wheel  7671382016 25 Aug 19:07 /Users/stux/Desktop/Sierra.iso


Next, upload the Sierra ISO to a data store on your ESXi host.
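If you'd rather not use the datastore browser in the web UI, scp works too; a sketch, assuming SSH is enabled on the host and with the hostname and datastore path below adjusted to match yours:
Code:
scp ~/Desktop/Sierra.iso root@esxi-host:/vmfs/volumes/datastore1/ISOs/Sierra.iso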

Unlocking ESXi's macOS support

Secondly, ESXi has support for Mac OS X guests, but it needs to be "unlocked".

You can unlock this support using VMware Unlocker, a set of scripts to enable/disable support for macOS guests in the various VMware hypervisors, including ESXi.

The official forum thread is:
http://www.insanelymac.com/forum/files/file/339-unlocker/

The latest released version is 2.08, but the version you need for ESXi 6.5U1 is 2.09RC, which can be downloaded from the github repository:
https://github.com/DrDonk/unlocker

Just click the "Clone or download" button, then Download ZIP. Rename the zip to "unlocker-209RC.zip". On a mac it'll end up in the Trash after decompressing.

Upload the zip to your datastore...

The following instructions and cautions are from readme.txt included with vmware-unlocker.

7. ESXi
-------
You will need to transfer the zip file to the ESXi host either using vSphere client or SCP.

Once uploaded you will need to either use the ESXi support console or use SSH to
run the commands. Use the unzip command to extract the files.

<<< WARNING: use a datastore volume to store and run the scripts >>>

Please note that you will need to reboot the host for the patches to become active.
The patcher is embedded in a shell script local.sh which is run at boot from /etc/rc.local.d.

You may need to ensure the ESXi scripts have execute permissions
by running chmod +x against the 2 files.

esxi-install.sh - patches VMware
esxi-uninstall.sh - restores VMware

There is a boot option for ESXi that disables the unlocker if there is a problem.

At the ESXi boot screen press shift + o to get the boot options and add nounlocker.

Note:
1. Any changes you have made to local.sh will be lost. If you have made changes to
that file, you will need to merge them into the supplied local.sh file.
2. The unlocker needs to be re-run after an upgrade or patch is installed on the ESXi host.
3. The macOS VMware tools are no longer shipped in the image from ESXi 6.5. They have to be
downloaded and installed manually onto the ESXi host. For additional details see this web page:

https://blogs.vmware.com/vsphere/2016/10/introducing-vmware-tools-10-1-10-0-12.html

Note the instructions above for disabling the unlocker, should your ESXi fail to boot. In my testing this did not happen.


In the ESXi shell, I cd'd to the datastore where I had uploaded the unlocker, unzipped it, renamed the directory, executed esxi-install.sh, and rebooted. The scripts already had the right permissions.

See the transcript below:

Code:
[root@localhost:~] cd /vmfs/volumes/datastore1/downloads
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads] unzip unlocker-209RC.zip
Archive:  unlocker-209RC.zip
   creating: unlocker-master/
  inflating: unlocker-master/.gitattributes
  inflating: unlocker-master/.gitignore
  inflating: unlocker-master/dumpsmc.exe
  inflating: unlocker-master/dumpsmc.py
  inflating: unlocker-master/esxi-build.sh
  inflating: unlocker-master/esxi-config.py
  inflating: unlocker-master/esxi-install.sh
  inflating: unlocker-master/esxi-uninstall.sh
  inflating: unlocker-master/gettools.exe
  inflating: unlocker-master/gettools.py
  inflating: unlocker-master/license.txt
  inflating: unlocker-master/lnx-install.sh
  inflating: unlocker-master/lnx-uninstall.sh
  inflating: unlocker-master/lnx-update-tools.sh
  inflating: unlocker-master/local-prefix.sh
  inflating: unlocker-master/local-suffix.sh
  inflating: unlocker-master/local.sh
  inflating: unlocker-master/osx-install.sh
  inflating: unlocker-master/osx-uninstall.sh
  inflating: unlocker-master/readme.txt
  inflating: unlocker-master/smctest.sh
  inflating: unlocker-master/unlocker.exe
  inflating: unlocker-master/unlocker.py
  inflating: unlocker-master/win-install.cmd
  inflating: unlocker-master/win-test-install.cmd
  inflating: unlocker-master/win-uninstall.cmd
  inflating: unlocker-master/win-update-tools.cmd
   creating: unlocker-master/wip/
  inflating: unlocker-master/wip/__init__.py
  inflating: unlocker-master/wip/argtest.py
  inflating: unlocker-master/wip/extract-smbiosdb.py
  inflating: unlocker-master/wip/generate.py
  inflating: unlocker-master/wip/smbiosdb.json
  inflating: unlocker-master/wip/smbiosdb.plist
  inflating: unlocker-master/wip/smbiosdb.txt
  inflating: unlocker-master/wip/unlocker-asmpatch.diff
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads] mv unlocker-master unlocker-209RC
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads/unlocker-209RC] ls
dumpsmc.exe		   esxi-install.sh	   license.txt		   local-prefix.sh	   osx-uninstall.sh	  unlocker.py		   win-update-tools.cmd
dumpsmc.py			esxi-uninstall.sh	 lnx-install.sh		local-suffix.sh	   readme.txt			win-install.cmd	   wip
esxi-build.sh		 gettools.exe		  lnx-uninstall.sh	  local.sh			  smctest.sh			win-test-install.cmd
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads/unlocker-209RC] ./esxi-install.sh
VMware Unlocker 2.0.9
===============================
Copyright: Dave Parsons 2011-16
Installing local.sh
Adding useVmxSandbox
Saving current state in /bootbank
Clock updated.
Time: 09:28:21   Date: 08/25/2017   UTC
Success - please now restart the server!
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads/unlocker-209RC] reboot


NOTE: If you need to update ESXi, run the esxi-uninstall.sh script, update, then run the esxi-install.sh script again.
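In other words, the update cycle looks something like this (a sketch, with the paths assumed to match the transcript above):
Code:
cd /vmfs/volumes/datastore1/downloads/unlocker-209RC
./esxi-uninstall.sh   # restore the stock VMware files
reboot
# ...apply the ESXi update or patch, then...
cd /vmfs/volumes/datastore1/downloads/unlocker-209RC
./esxi-install.sh     # re-apply the unlocker
reboot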

Installing VMware tools for macOS into ESXi

Once the unlocker is installed, and ESXi has safely rebooted, you need to install the macOS VMware tools.

The macOS VMware Tools ISO is no longer included in the base installation; you need to download it separately and load it into ESXi.

At the time of writing, the latest version is 10.1.10,
https://my.vmware.com/group/vmware/details?downloadGroup=VMTOOLS10110&productId=614

You want to download the "VMware Tools packages for FreeBSD, Solaris and OS X". I chose the VMware-Tools-10.1.10-other-6082533.zip zipped version, over the tgz.

Again, upload to your datastore.

Then SSH into the ESXi shell, cd to the datastore (i.e. under the /vmfs/volumes directory), and unzip:

unzip VMware-Tools-10.1.10-other-6082533.zip

Inside the unzipped directory is a vmtools directory, and inside that is "darwin.iso". This needs to be copied to
/usr/lib/vmware/isoimages/darwin.iso

My transcript:
Code:
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads] cd VMware-Tools-10.1.10-other-6082533/vmtools/
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads/VMware-Tools-10.1.10-other-6082533/vmtools] ls
darwin.iso					freebsd.iso.sha			   solaris.iso.sig
darwin.iso.sha				freebsd.iso.sig			   solaris_avr_manifest.txt
darwin.iso.sig				solaris.iso				   solaris_avr_manifest.txt.sig
freebsd.iso				   solaris.iso.sha
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads/VMware-Tools-10.1.10-other-6082533/vmtools] cp darwin.iso /usr/lib/vmware/isoimages/darwin.iso
[root@localhost:/vmfs/volumes/599a6db2-d2f3f71f-617a-ac1f6b10ec96/downloads/VMware-Tools-10.1.10-other-6082533/vmtools] ls -l /usr/lib/vmware/isoimages/darwin.iso
-rwx------	1 root	 root	   3561472 Aug 25 10:17 /usr/lib/vmware/isoimages/darwin.iso


Creating a macOS Sierra VM


Next, create a VM. Give it enough cores/RAM for Sierra, thin-provision the disk if you want, set the SCSI controller to Parallel (optimal), select the Sierra ISO for the CD-ROM drive, select the right VM Network for the network adapter, and select USB 3.0 for the USB controller.

ESXi only supports the E1000 network adapter for macOS.

Screen Shot 2017-08-25 at 7.54.01 PM.png



And then there is one very important thing to do in the VM options pane.

Go to the Advanced section, then click the "Edit Configuration..." button
Screen Shot 2017-08-25 at 7.54.12 PM.png


Then "Add parameter" for "smc.version" with value of "0"
Screen Shot 2017-08-25 at 7.54.50 PM.png

Note: smc.version is set to 0

Without this parameter, the VM will fail to boot.
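If you prefer, the same thing can be achieved by editing the VM's .vmx file directly on the datastore (the exact path to your .vmx will differ); the entry is simply:
Code:
smc.version = "0"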

Now you can save the VM and start it up. It should boot into the Sierra installer.

Screen Shot 2017-08-25 at 7.58.10 PM.png

Select your language

Then on the next screen, select Disk Utility from the Utilities menu
Screen Shot 2017-08-25 at 7.58.59 PM.png


Once Disk Utility loads, select the VMware Virtual disk and click the Erase button to erase it. The defaults are correct.
Screen Shot 2017-08-25 at 8.01.04 PM.png


Then quit Disk Utility.

Next continue and install Sierra onto the disk you initialised previously.

After an eternity it should finish installing and begin a countdown to restart. Choose Shutdown from the Apple menu. If you miss it, just shut down anyway.

Next, edit the VM and change the CD ISO file. Navigate to vmimages/tools-isoimages/darwin.iso

Screen Shot 2017-08-25 at 8.25.54 PM.png


Save, and then power-on the VM.

Go through the post-install configuration setup as per normal.

Once at the Desktop, Install VMware tools.

Screen Shot 2017-08-25 at 8.32.01 PM.png


It'll force you to restart.

After it restarts... shutdown.

Edit the VM, and remove the CD drive.

Save.

And you're done :)

Now you can install all the macOS software and servers/services you want to.

Why?


I wanted to virtualize away a pre-existing macOS server, since a lot of its functionality has been taken over by FreeNAS. I no longer need its Time Machine serving capabilities, or its SMB/AFP serving capabilities, but macOS Server does provide a useful caching service for Macs, iPhones, iPads, AppleTVs, etc on your network.

All software updates, store downloads, media, iCloud data, etc. downloaded by any Apple device on the network will be through-cached on the server. Even music/video.

Here's one I prepared earlier:
Screen Shot 2017-08-25 at 9.22.28 PM.png


Another reason to install macOS Server would be to use the Xcode Server...

Aside from macOS Server, you might want to...

Host an AirPrint gateway...

Host other mac only servers...

And perhaps just to test software on legacy versions of macOS.

With everything backed by FreeNAS/ZFS-hosted iSCSI or NFS shares.

If you want to, you can also install the GlobalSAN iSCSI initiator directly in the macOS instance, add the Storage Network as a second vNIC, and have the Mac connect directly to an HFS+ zvol (ZFS-backed) via iSCSI.

You can back up at 20Gb/s to a Time Machine share on your FreeNAS instance.

If you have a need to virtualize macOS, this is how to do it.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Using a High Performance PCIe NVMe SSD for SLOG/L2ARC and Swap.

If you use your NFS and iSCSI mounts for VMware, you will probably find that performance is substandard without a high-performance SLOG.

In my tests with NFS (sync=standard) and iSCSI (sync=always), I found sequential performance of just 5MB/s inside a VM! This is because VMware issues all writes to NFS datastores as sync writes, and for VM consistency this is probably the right decision on VMware's behalf. And if you want data consistency on your iSCSI-hosted VMs, you should force sync there too.

https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/

One approach to resolving this is simply to disable sync on NFS volumes and not force sync on iSCSI. That might be a practical solution, but it will leave your VMs exposed to corrupted data in the event of a crash.
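For reference, sync behaviour is a per-dataset (or per-zvol) ZFS property; a minimal sketch, with hypothetical dataset/zvol names:
Code:
zfs get sync tank/vmware_nfs             # check the current setting
zfs set sync=always tank/vmware_iscsi    # force sync writes for an iSCSI zvol
# zfs set sync=disabled tank/vmware_nfs  # fast, but unsafe for VMs if the host crashes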

If you want consistent VM disks, then you will want to use a SLOG, or put up with abysmal write performance. And if you want to use a SLOG then it better be a SLOG with Power Loss Protection (PLP), otherwise there is not much point.

Also, a SLOG should have very fast and consistent sequential write capabilities.

One of the cheapest options is to use a SATA SSD with PLP, say an Intel S3500 or S3700. This is not necessarily performant, and also is not possible in this build without adding an additional SATA/SAS controller, which I did not want to do.

Benchmarks comparing the P3700, ZeusRAM and S3700 (high-end NVMe SSD, RAM drive and SATA drive, all with PLP); the P3700 wins:
https://forums.servethehome.com/index.php?threads/zeusram-vs-intel-p3700.13159/
https://forums.servethehome.com/index.php?threads/whiteys-freenas-zfs-zil-testing.16736/

Benchmarks comparing S3700 and S3500 (SATA options)
https://b3n.org/zil-ssd-benchmarks-virtual-freenas/

STH P4800x as SLOG tests (skip to "Real-World Intel Optane in ZFS ZIL/SLOG and L2ARC Scenarios")
https://www.servethehome.com/intel-optane-hands-on-real-world-benchmark-and-test-results/
What we can say is this, Optane (Intel DC P4800X) is now the technology to get for a SLOG/ ZIL device if you just want maximum performance outside of something exotic (e.g. NVDIMMS.) The Intel DC P3700 is still great in this role, and is relatively inexpensive, even when overprovisioned for logging device duties but we were getting incrementally faster results with the Optane drive. Since the leading practice is to mirror ZIL / SLOG devices there is a major cost factor in getting two AIC form factor Intel DC P4800X Optane drives and the P3700 is significantly less expensive.

STH Top Picks for SLOG drives:
https://www.servethehome.com/buyers...as-servers/top-picks-freenas-zil-slog-drives/


I had planned to use a PCIe M.2 carrier card to allow additional M.2 drives to be added, and I was hoping that a suitable M.2 22110 PCIe NVMe disk would be available, but I couldn't source one with the right performance capabilities, or endurance... perhaps that will change in the near future.

So, I resolved to use a PCIe HHHL AIC SSD (i.e. an SSD on a PCIe card) instead, at least in the meantime. After looking at all the available Intel options, the only viable candidates were the Intel P3700, Intel P4800X and the Intel 750 SSDs.

The P4800X is ludicrously expensive, and the Intel 750 has poor endurance and would probably not last past a year. The P3700 is relatively well priced, has awesome endurance (10 DWPD, i.e. 4TB/day) and a 5-year warranty, and its specifications should exceed the capabilities of this system. It's the Goldilocks option.

So I went with a 400GB P3700, justifying it on the basis that I'd pull it out for use in my larger system when I commissioned that.
IMG_1260.jpg


It's a fantastic drive.

In my testing, performance with the P3700 is similar to using a RAM disk, so I'm very happy with it. In fact, although I had planned to use the P3700 in my larger ESXi build, I'm so happy with its performance in this small system that I'm debating leaving it in there, even though it is extreme overkill.

So, as part of doing some extensive testing, I've determined the best thing to do is to set the drive to 4K native sectors and to over-provision it to roughly 25%. The over-provisioning ensures that the drive has plenty of spare sectors to work with for wear levelling, as well as to maintain consistent write performance.

But after even more testing, the drive has so much performance on tap that I determined I could use it for L2ARC as well as SLOG with no practical performance deterioration. And I might as well use it as a fast swap device too. Since swap is rarely used, it won't affect performance, but it does mean I can disable swap on my HDs, which protects me from the kernel crashes that can occur when a swap-holding HD fails, a problem I've run into a number of times.

Although I originally had swap and L2ARC as virtual disks, putting them on the PCIe NVMe drive has the benefit of carrying over to bare-metal FreeNAS.

So, the next thing is to decide how to allocate the partitions, and how to over-provision this drive.

Deciding how much over-provisioning to use

https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/
Choosing a SLOG device: OpenZFS aggregates your writes into “transaction groups” which are flushed to their final location periodically (every 5 seconds in FreeNAS & TrueNAS). This means that your SLOG device only needs to be able to store as much data as your system throughput can provide over those 5 seconds. Under a 1GB connection, this would be about 0.625GB. Correspondingly a 10GB connection would require 6.25GB and 4x10GB would require 25GB. This means latency, rather than size is your main consideration in choosing a device.

With a SLOG, you only need room for a single transaction group, which by default is 5 seconds of writes. Since I've benchmarked the VMware subsystem at about 2GB/s, that means I only need a 10GB SLOG at most. So we'll use a 20GB SLOG partition, because overkill.

I've already done some testing with L2ARC using virtual disks, and I saw performance degradation with an L2ARC larger than 64GB at the current memory allocation, so I'll use 64GB for L2ARC. I can always vary this in the future, or just stripe in a virtual disk again.

And, I think 16GB of swap should be plenty.

And 16 + 20 + 64 = 100GB. And I like round numbers.

When we over-provision, we specify the OP size in GB, but when we later partition using gpart, sizes are specified in GiB. 100GiB is about 107.4GB.

So, we'll OP to 108GB (out of 400GB), i.e. 27%.
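Just to show the working (plain arithmetic, nothing drive-specific):
Code:
# 16 GiB swap + 20 GiB SLOG + 64 GiB L2ARC = 100 GiB
echo $(( (16 + 20 + 64) * 1024 * 1024 * 1024 ))    # 107374182400 bytes
echo "scale=2; (16+20+64) * 1024^3 / 1000^3" | bc  # ~107.37 GB, hence OP to 108GB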

Configuring an Intel Datacentre SSD to use Native 4K sectors

Native 4K sectors are more performant than 512-byte sectors... and since the drive supports them, let's do it.

You configure the drive using the Intel SSD Data Center Tool, or ISDCT. The tool runs under enterprise flavours of Linux, or Windows; I prefer Windows. So I installed a Windows VM, passed the drive through into the VM (remembering to reserve all of the VM's memory, as PCIe pass-through requires), and then booted Windows.

In order to change the LBA size, the drive has to be blank. If you've already used it in FreeNAS (or some other platform), you should erase it first, perhaps with gpart destroy.

ISDCT needs to be run from an Administrative command prompt

In order to see what Intel SSDs are available for use with isdct:
isdct show -intelssd

Screen Shot 2017-08-28 at 5.07.30 PM.png


If you haven't already, and your SSD's firmware is out of date, you should update it. You can use ISDCT to do that. See the manual.
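If I recall correctly, the update itself is a one-liner run from the same Administrative prompt (drive index 0 assumed, as explained just below; double-check the syntax against the manual for your ISDCT version):
Code:
isdct load -intelssd 0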

In the above screenshot, you can see "Index : 0". You need to use the Index with all the below commands.

The following command will secure erase, and format Intel SSD #0 with 4K native LBA sectors.
isdct start -intelssd 0 -NVMeFormat LBAformat=3 SecureEraseSetting=1

"start" we'll be executing an operation
"-intelssd 0" means use Intel SSD #0. If you have multiple Intel SSD's on your system, you need to specify the right one!
"-NVMeFormat" is the operation we're performing
"LBAformat=3" is the option for 4Kn sectors.
"SecureEraseSetting=1" ensures the drive's performance is reset to FOB (Fresh Out of Box) before we OP.

Screen Shot 2017-08-28 at 4.41.07 PM.png


Intel's instructions for the above procedure are here

Over-provisioning an Intel SSD using ISDCT


Over-provisioning an SSD is a technique used to limit how much of the SSD is used for data, providing more spare flash area for the controller to use for wear levelling and usually providing more consistent write performance. Using ISDCT it is very easy to OP an Intel SSD.

To OP a 400GB IntelSSD at Index 0:

...to 20GB (i.e. 380GB locked away)
isdct.exe set -intelssd 0 MaximumLBA=20GB

...to 50% (100% would be no OP)
isdct.exe set -intelssd 0 MaximumLBA=50%

...and to restore to native,
isdct.exe set -intelssd 0 MaximumLBA=native

In this case, I'm going to OP to 108GB, as previously discussed.

isdct set -intelssd 0 MaximumLBA=108GB
Screen Shot 2017-08-28 at 4.52.15 PM.png


Next, I'll shut down the Windows VM, re-attach the SSD to the FreeNAS VM, and reboot the FreeNAS VM. This has the effect of power-cycling the SSD, as requested in the screenshot above.


Partitioning a PCIe NVMe SSD for Swap/SLOG and L2ARC

I'm planning on partitioning my 108GB OP'd SSD into 16GiB of swap, 20GiB of SLOG, and the remainder (circa 64GiB) as L2ARC.

You can confirm which disk your PCIe NVMe drive is by using the FreeNAS GUI's View Disks option. In my case, it's nvd0.

So, to create the GPT table and the three partitions (swap/SLOG/L2ARC):

gpart create -s gpt nvd0
gpart add -i 1 -b 128 -t freebsd-swap -s 16g nvd0
gpart add -i 2 -t freebsd-zfs -s 20g nvd0
gpart add -i 3 -t freebsd-zfs nvd0

And finally

glabel status | grep nvd0

To see the gptids of the 3 partitions you created.

That will look something like this:

Code:
root@chronus:~ # gpart create -s gpt nvd0
nvd0 created
root@chronus:~ # gpart add -i 1 -b 128 -t freebsd-swap -s 16g nvd0
nvd0p1 added
root@chronus:~ # gpart add -i 2 -t freebsd-zfs -s 20g nvd0
nvd0p2 added
root@chronus:~ # gpart add -i 3 -t freebsd-zfs nvd0
nvd0p3 added
root@chronus:~ # glabel status | grep nvd0
gptid/98fec082-8bc1-11e7-b163-000c2987fb12	 N/A  nvd0p1
gptid/9edb6857-8bc1-11e7-b163-000c2987fb12	 N/A  nvd0p2
gptid/a2b0a8ca-8bc1-11e7-b163-000c2987fb12	 N/A  nvd0p3
root@chronus:~ #


You will need the three gptids from the glabel status output for the steps below.

Configuring swap to replace FreeNAS' RAID0 striped swap.

The swap will activate automatically the next time you restart FreeNAS, but assuming you want to disable your HD-based swap now, I would follow the instructions in How to relocate swap to an SSD or other partition; specifically, I would add a post-init command with the contents below:

Code:
swapoff -a ; grep -v -E 'none[[:blank:]]+swap[[:blank:]]' /etc/fstab > /etc/fstab.new && echo "/dev/gptid/98fec082-8bc1-11e7-b163-000c2987fb12.eli none swap sw 0 0" >> /etc/fstab.new && mv /etc/fstab.new /etc/fstab ; swapon -a


where the gptid is the one for the swap partition (i.e. nvd0p1). If you want, you can also run that command in the shell immediately. If you do, you can confirm everything is working by issuing swapinfo; you should see that the only swap device is now a gpt device.

Adding the SLOG/L2ARC partitions to your pool

Next we add the SLOG...

zpool add tank log gptid/9edb6857-8bc1-11e7-b163-000c2987fb12
where "tank" is the name of your pool, and "gptid/9edb6857-8bc1-11e7-b163-000c2987fb12" is the gptid of your slog partition (ie nvd0p2).

And then the L2ARC...

zpool add tank cache gptid/a2b0a8ca-8bc1-11e7-b163-000c2987fb12
Again, tank is your pool, and the gptid should be that of your L2ARC partition (nvd0p3).

You can confirm the log and cache devices have been added with zpool status tank (where "tank" is your pool name)

And the whole process will look something like this:

Code:
root@chronus:~ # swapoff -a ; grep -v -E 'none[[:blank:]]+swap[[:blank:]]' /etc/fstab > /etc/fstab.new && echo "/dev/gptid/98fec082-8bc1-11e7-b163-000c2987fb12.eli none swap sw 0 0" >> /etc/fstab.new && mv /etc/fstab.new /etc/fstab ; swapon -a
swapon: adding /dev/gptid/98fec082-8bc1-11e7-b163-000c2987fb12.eli as swap device
root@chronus:~ # swapinfo
Device		  1K-blocks	 Used	Avail Capacity
/dev/gptid/98fec082-8bc1-11e7-b  16777216		0 16777216	 0%
root@chronus:~ # zpool add tank log gptid/9edb6857-8bc1-11e7-b163-000c2987fb12
root@chronus:~ # zpool add tank cache gptid/a2b0a8ca-8bc1-11e7-b163-000c2987fb12
root@chronus:~ # zpool status tank
  pool: tank
state: ONLINE
  scan: scrub repaired 0 in 43h26m with 0 errors on Mon Aug 21 11:54:23 2017
config:
	NAME											STATE	 READ WRITE CKSUM
	tank											ONLINE	   0	 0	 0
	  raidz2-0									  ONLINE	   0	 0	 0
		gptid/584c08b1-7698-11e7-88b3-000c2987fbfe  ONLINE	   0	 0	 0
		gptid/847418a9-769b-11e7-88b3-000c2987fbfe  ONLINE	   0	 0	 0
		gptid/fc3d228f-769c-11e7-88b3-000c2987fbfe  ONLINE	   0	 0	 0
		gptid/ea5cc7a1-769c-11e7-88b3-000c2987fbfe  ONLINE	   0	 0	 0
		gptid/282edd2e-769b-11e7-88b3-000c2987fbfe  ONLINE	   0	 0	 0
		gptid/b333ca61-7694-11e7-88b3-000c2987fbfe  ONLINE	   0	 0	 0
	logs
	  gptid/9edb6857-8bc1-11e7-b163-000c2987fb12	ONLINE	   0	 0	 0
	cache
	  gptid/a2b0a8ca-8bc1-11e7-b163-000c2987fb12	ONLINE	   0	 0	 0

errors: No known data errors
root@chronus:~ #


And you can confirm with View Status in the FreeNAS GUI:

Screen Shot 2017-08-28 at 5.37.20 PM.png


And that is that.

Screen Shot 2017-08-03 at 8.12.00 PM.png
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
How to fix the ESXi Hardware Sensor display on Xeon D boards

VMware has a neat hardware sensor display which shows you the IPMI sensors. This can save you from having to log in to your IPMI page to check things like CPU temperature or fan speeds. Quite useful when testing fan control scripts ;)

Host -> Monitor -> Hardware -> System Sensors

The problem is that by default it only shows the raw output... which is not quite right.

Screen Shot 2017-08-29 at 9.15.59 PM.png


Fixing it is quite easy. Shut down your VMs, then run the following commands in the hypervisor shell:

esxcli system wbem set --ws-man false
esxcli system wbem set --enable true
reboot
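If you want to double-check the settings before rebooting, I believe the current WBEM configuration can be shown with:
Code:
esxcli system wbem get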

And then after the reboot, everything should be good
Screen Shot 2017-08-29 at 9.25.49 PM.png


More Information: https://tinkertry.com/fix-xeon-d-inaccurate-cim-data-default-in-vsphere65
 
Joined
Feb 2, 2016
Messages
574
This is a fantastic resource, @Stux. Thank you!

Cheers,
Matt
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Sorry if I missed it, and I know you did custom fan control work on this, but I was wondering how this case was doing at keeping the drive temps under control?
Overall assessment of the system?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Sorry if I missed it, and I know you did custom fan control work on this, but I was wondering how this case was doing at keeping the drive temps under control?
Overall assessment of the system?

After upgrading the fans, the thermals are good. Drive temps well below 40 under load, main fans at 30% and virtually silent at idle.

The stock CPU fan runs at about 50% most of the time, where it's not objectionable. If I don't like that fan after the box goes into the living room, I'll replace it.

Max CPU temp under mprime is about 73C

The system has met all the goals: a 16-thread ESXi powerhouse, 10GbE FreeNAS, living-room compatible.

I'm in the process of testing its 10GbE capabilities against another 10GbE device... but I'm waiting on said device.

Will write up the pfSense install eventually.

Have yet to install Plex

Still need to install a docker machine, but will probably use ubuntu under ESXi for that
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
After upgrading the fans, the thermals are good. Drive temps well below 40 under load, main fans at 30% and virtually silent at idle.

The stock CPU fan runs at about 50% most of the time, where it's not objectionable. If I don't like that fan after the box goes into the living room, I'll replace it.

Max CPU temp under mprime is about 73C

The system has met all the goals: a 16-thread ESXi powerhouse, 10GbE FreeNAS, living-room compatible.

I'm in the process of testing its 10GbE capabilities against another 10GbE device... but I'm waiting on said device.

Will write up the pfSense install eventually.

Have yet to install Plex

Still need to install a docker machine, but will probably use ubuntu under ESXi for that
Thank you for all this great information.
 

Dog

Cadet
Joined
Aug 19, 2017
Messages
6
I'm really enjoying trying to mirror your build as a learning experience. I still have a lot of content to work through, but I'm learning a lot and I'm looking forward to the next parts (pfSense, Plex, etc.). I really appreciate it.
 

JJDuru

Dabbler
Joined
Nov 29, 2014
Messages
19
Santa Stux,

You made me come out of my lurking ways to really say thank you for all these Thanksgiving/Christmas goodies; I cannot say thank you enough for the build info. Every single time I land on it, I feel like it's Christmas all over again.

I'll promise to go by your book and never lose my way again.

Yours,

JJ
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
@Stux
Awesome findings and manual. Thx for that.

I was playing around with an Intel 750 400GB. It has awesome performance and the system becomes much faster/more responsive, but I'm a bit afraid of its endurance. So it will stay in the small system for long-term observation.
For the production system I will buy a DC P3700. Thanks to your manual, I will use it as a SLOG for 2 datastores.
  • 8x 2TB WD RE4
  • 6x 1TB WD Red
All drives are mirrored vdevs and then striped (the RAID10-like thing), serving only iSCSI (sync=always) via MPIO (currently 4x 1Gbit).

Because you are using your P3700 for swap, SLOG and L2ARC, I think it will be capable of acting as 2x SLOG. I will cut it down to 25% of its space and use 2x 20GB partitions.
The system has about 96GB of RAM and an ARC hit ratio of <40%. I don't think I need any L2ARC...
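The partitioning for the two 20GB SLOG partitions should look much like the gpart steps earlier in the thread; a sketch, assuming the drive shows up as nvd0:
Code:
gpart create -s gpt nvd0
gpart add -i 1 -t freebsd-zfs -s 20g nvd0   # SLOG for datastore pool 1
gpart add -i 2 -t freebsd-zfs -s 20g nvd0   # SLOG for datastore pool 2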

Will report back when the card is installed (estimated shipping is 3 weeks).
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
When you have it up and running you can use gstat to check how busy it is.

L2ARC and swap make very low demands on the drive, whereas the SLOG uses up to 75% of its available IOPS.

Two SLOGs might saturate its capacity, but that's better than nothing, and it only matters if both pools are busy at the same time.

So give it a try and monitor it. If you see sustained 90-100% busy, then you're probably saturating the SLOG.
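For example, to watch just the NVMe device (assuming it shows up as nvd0, as in my setup):
Code:
gstat -f nvd0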
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
When you have it up and running you can use gstat to check how busy it is.

L2ARC and swap make very low demands on the drive, whereas the SLOG uses up to 75% of its available IOPS.

Two SLOGs might saturate its capacity, but that's better than nothing, and it only matters if both pools are busy at the same time.

So give it a try and monitor it. If you see sustained 90-100% busy, then you're probably saturating the SLOG.

This is what I'm currently doing: just watching the stats on the test system. At the end of the week I will move the 750 into the big system as 2x SLOG and let it run until the P3700 arrives.
Of course I will report back with some results ;)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
zpool iostat [-v] [<pool>] [<refresh interval seconds>]

is a very cool command too.
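For example, to watch a pool named tank, including its individual log and cache vdevs, with a 5-second refresh:
Code:
zpool iostat -v tank 5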
 

GBillR

Contributor
Joined
Jun 12, 2016
Messages
189
WOW.

I can't possibly thank you enough for your detailed documentation of this build. I recently took the leap and put together an all in one ESXi server, and wish that this comprehensive thread had been available to me when I was scouring the forum and the web for information along the way.

Your meticulous note keeping, step-by-step details and pictures have provided me with solutions to a few of the roadblocks I ran into. Especially with regards to the ESXi networking setup and the VM settings.

This belongs in the resource section for sure.

Thank you Sir!
 

Sisyphe

Dabbler
Joined
Nov 8, 2017
Messages
11
I'm new to FreeNAS / ESXi and I must say this is probably the most useful, detailed and comprehensive guide I've seen so far. Thank you Stux!

I'm currently working on a similar build based on the same motherboard, but with a U-NAS 801A case and an M1015 card. Hence the PCIe slot is not available for the Intel SLOG SSD.

Is there any other alternative you can recommend for SLOG, using either the M.2 slot or SATA? What will be the downside in terms of performance?

Thanks!
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
System specs: see signature.

1x Intel SSD 750 400GB PCIe (MaximumLBA=25%, LBA 4K)
2x SLOG (20GB each, 60GB free)
approx 100-150GB writes per day

serving several ZVOLs (16Kn no dedup, no compression)
via iSCSI (sync=always)
to 2x ESXi 6.5 (RR, IOPS=1)
with 3x 1Gb NIC (MPIO)
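(For reference, the RR / IOPS=1 part is applied per device on each ESXi host; a sketch, with the naa identifier being a placeholder you'd replace with your LUN's own ID:)
Code:
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.XXXXXXXXXXXXXXXX --type=iops --iops=1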

I think performance is really impressive, and if there were more MPIO paths, I could max it out...

There is some room for optimization, but here's a short test:
(yeah, I know... these "performance test tools" don't really represent daily usage)

Q32 T1
Q32-T1.png



Q32 T4
Q32-T4.png


Will report back when the DC P3700 is delivered; shipping needs a few more (3+) weeks.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
I'm new to FreeNAS / ESXi and I must say this is probably the most useful, detailed and comprehensive guide I've seen so far. Thank you Stux!

I'm currently working on a similar build based on the same motherboard, but with a U-NAS 801A case and an M1015 card. Hence the PCIe slot is not available for the Intel SLOG SSD.

Is there any other alternative you can recommend for SLOG, using either the M.2 slot or SATA? What will be the downside in terms of performance?

Thanks!

There are plenty of suitable SATA drives, which you can connect to the mobo SATA ports; the downside is that performance will not be as good as the P3700.

You’re looking at drives like Intel s3600 I believe
 

Sisyphe

Dabbler
Joined
Nov 8, 2017
Messages
11
There are plenty of suitable SATA drives, which you can connect to the mobo SATA ports; the downside is that performance will not be as good as the P3700.

You’re looking at drives like Intel s3600 I believe

Thank you Stux. After further investigation, I identified 2 possible options:

1) Use an Intel Optane 900p 280GB SSD in U.2 format for SLOG, connected via an M.2 PCIe to mini-SAS SFF-8643 adapter such as this one. VMs will be stored on an SSD connected to a motherboard SATA port.

2) Use a SATA SSD for SLOG (such as the Intel S3600 you mentioned), and an NVMe M.2 SSD such as the Samsung 960 EVO for VMs.

I believe 1) should work and will be significantly faster than 2) overall, especially for write-intensive tasks hitting the FreeNAS storage. What do you think?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Thank you Stux. After further investigation, I identified 2 possible options:

1) Use an Intel Optane 900p 280GB SSD in U.2 format for SLOG, connected via an M.2 PCIe to mini-SAS SFF-8643 adapter such as this one. VMs will be stored on an SSD connected to a motherboard SATA port.

2) Use a SATA SSD for SLOG (such as the Intel S3600 you mentioned), and an NVMe M.2 SSD such as the Samsung 960 EVO for VMs.

I believe 1) should work and will be significantly faster than 2) overall, especially for write-intensive tasks hitting the FreeNAS storage. What do you think?

I like the idea of using the U.2 900p.

Btw, there is currently an issue with the 900p, ESXi and FreeBSD:
https://bugs.freenas.org/issues/26508
 