Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Stux, thanks so much for this guide about SLOG devices. I will configure a new Intel P3700 shortly and will stick to this guide closely. However, just to be sure, I would like to ask one question:

May I skip your chapter "Partitioning a PCIe NVMe SSD for Swap/SLOG and L2ARC" when the P3700 is used as a SLOG device only?

After over-provisioning the P3700 with the Intel utility (I think I will reduce the capacity to ~25 GByte), I would like to add the P3700 as a SLOG device to a zpool using the FreeNAS WebGUI Volume Manager. Is it necessary to create any partitions in the FreeNAS shell before using the Volume Manager?
I guess the corresponding partition will be created automatically when the SLOG is added to the zpool via the Volume Manager?

Once again: thanks so much!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
No need to partition if you just want to use a SLOG. I would recommend using the Intel tool to over-provision, though.
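With the Intel SSD Data Center Tool it's roughly the following (the drive index and percentage are placeholders; 7% of a 400GB drive is about 28GB, so check the tool's listing first and adjust):

Code:
# list installed Intel SSDs and their indexes
isdct show -intelssd
# cap usable capacity at a percentage of the native size (index 0 assumed)
isdct set -intelssd 0 MaximumLBA=7%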
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
@Scampicfx
If the P3700 won't be stressed to its max by your system, you could use two or more partitions for something else. I over-provisioned to 100GB and use 2x 20GB partitions as SLOGs (for two datastores). Since that isn't stressing the NVMe, I have 60GB of room for something else.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You can always remove the SLOG from the pool and repartition again later.
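From the shell that's a one-liner (pool name and gptid below are placeholders):

Code:
# detach the log vdev from the pool; the pool keeps running without a SLOG
zpool remove tank gptid/<gptid of the slog partition>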
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
You can always remove the SLOG from the pool and repartition again later.
Of course. But when only a 25GB device is visible to FreeNAS, it's only 25GB.
IMHO it's good to have some spare space on the device for dynamic use: a 100GB device partitioned with 25GB of SLOG and 75GB of free space is a more flexible one.

If usage via the GUI is the primary goal, though, then you shouldn't go with manual partitioning.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Of course. But when only a 25GB device is visible to FreeNAS, it's only 25GB.
IMHO it's good to have some spare space on the device for dynamic use: a 100GB device partitioned with 25GB of SLOG and 75GB of free space is a more flexible one.

If usage via the GUI is the primary goal, though, then you shouldn't go with manual partitioning.

You can also reconfigure the OP.
 

Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Thanks, that was helpful for decision-making! :) I will try SLOG-only to begin with. Maybe I will repartition / reconfigure at a later point, if deemed necessary :)
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Finally, the Intel DC P3700 arrived :cool:
Overall performance is like the 750's, but that is more or less down to the modest 3x 1Gb iSCSI connection.

For now I did:
OP to 25% (~100GB of 400GB)
4K sectors
newest firmware

Code:
# create a GPT partition table, then three 20GB partitions
gpart create -s gpt nvd0
gpart add -i 1 -b 128 -t freebsd-zfs -s 20g nvd0
gpart add -i 2 -t freebsd-zfs -s 20g nvd0
gpart add -i 3 -t freebsd-zfs -s 20g nvd0

nvd0p1 + nvd0p2 = SLOG for datastores
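Attaching them from the shell looks roughly like this (the pool names here are placeholders; glabel status lists the gptids):

Code:
# look up the gptid labels of the new partitions
glabel status
# add one partition to each pool as a log vdev
zpool add datastore1 log gptid/<gptid of nvd0p1>
zpool add datastore2 log gptid/<gptid of nvd0p2>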

experimental :D
Code:
# pool on the third partition, mounted under /mnt; export it so it can be
# imported through the FreeNAS GUI
zpool create -m /mnt/nvme-1 nvme-1 <gptid of nvd0p3>
zpool export nvme-1

There is also a datastore on the 3rd partition, and performance is great. I don't know yet what the datastore will be good for... perhaps some place to store shared ESXi stuff.
 

hackbeard

Cadet
Joined
Nov 11, 2017
Messages
2
First, I would like to say hails for this extraordinary tutorial, Stux; simply great work, man!!!
I just bought myself a custom Supermicro server with a Xeon D-1528, 32GB RAM, an M.2 SSD for ESXi and an Intel P3700 for ZIL/SLOG from a local distributor.
Nearly the same setup you introduced here.
I already have the host up and running, and am now stepping forward to the FreeNAS VM.

I have one question:
I would like to buy a UPS so the machine is safe when the electricity says bye-bye (like some days ago, what a pity...).
Can you tell me what's best to buy here, and how I can configure it so the VMs and the host are shut down safely?

Thanks a lot in advance, and again, cheers for all your big effort here!
 
Last edited:

usergiven

Dabbler
Joined
Jul 15, 2015
Messages
49
This is a fanfreakingtastic thread; great walkthrough, Stux! I have been salivating over the D-1541 ever since I started reading about it, so it's awesome to see it put together, with pictures no less! I bet that thing runs quite a swath of VMs without breaking a sweat.
I have one question:
I would like to buy a UPS so the machine is safe when the electricity says bye-bye (like some days ago, what a pity...).
Can you tell me what's best to buy here, and how I can configure it so the VMs and the host are shut down safely?
Even though this isn't really a FreeNAS question, I do believe losing power to your FreeNAS machine, even when virtual, should be avoided. I have ESXi 6.5 running FreeNAS among some other VMs that are important for my home environment. I knew I needed something (apcupsd) to monitor power-loss events that was flexible enough to send instructions to the ESXi host at just the right time, invoking what is called a "graceful shutdown" of the VMs along with the host, in a specific order. It took me a while to find something that worked with my setup. I found the answer/walk-through by MrMajestyk to be the one that works for me: https://serverfault.com/questions/462993/vmware-esxi-shutdown-triggered-by-apc-ups-connected-via-usb

On point 6, make sure you run that initial PuTTY connection to your ESXi host as root. In my setup I'm running Ubuntu Server, so I used sudo before the command.

Key variables:
The UPS model must have USB capability and be plugged into the host. I have the APC Back-UPS 550G.
ESXi must have SSH enabled: under Host > Manage > Services, right-click TSM-SSH, click Start, and set the policy to start and stop with the host.
VMware Tools must be installed on each VM (that you care about) so they can properly shut down when invoked.
The autostart order must be set for each VM so that when the shutdown occurs, the correct reverse order happens accordingly.
More on autostart: there's a system policy setting in Host > Manage > System > Autostart that lets you set a Stop Action; pick "Shut Down" and make sure all your VMs use "system default".

I edited /etc/apcupsd/apcupsd.conf with the following additional variable changes:

Code:
ONBATTERYDELAY 60
BATTERYLEVEL 25
MINUTES 6
TIMEOUT 30

This waits 60 seconds before doing anything after the UPS notices a power loss. The next three variables all trigger shutdowns in one way or another, so it's overkill, but whatever: the shutdown code is sent at 25% battery level remaining, at 6 minutes of juice left, or after 30 seconds on battery.
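For reference, the hook apcupsd ends up calling can be a plain shell script. Below is a rough sketch of the kind of thing the linked walk-through builds; the host address, key-based SSH and the fixed wait are assumptions from my setup, so adjust to taste:

Code:
#!/bin/sh
# /etc/apcupsd/doshutdown - apccontrol runs this on the shutdown event
ESXI="root@192.168.1.10"   # placeholder address; key-based SSH assumed

# ask ESXi to gracefully shut down each registered VM (needs VMware Tools)
for VMID in $(ssh $ESXI "vim-cmd vmsvc/getallvms" | awk 'NR>1 {print $1}'); do
    ssh $ESXI "vim-cmd vmsvc/power.shutdown $VMID"
done

sleep 120                  # give the guests time to finish stopping

# then power off the host itself
ssh $ESXI "poweroff"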

Hope that helps!
 

Stryf

Dabbler
Joined
Apr 3, 2016
Messages
19
If you buy a CyberPower UPS, then check out TinkerTry's article(s) about CyberPower, as he has a guide for setting up a virtual appliance that the manufacturer has readily available for download and import into ESXi. It worked with my consumer-grade ~800VA UPS.
 

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
Stux, I was directed here based on some questions I had around building my own AIO box, and must say this thread has blown me away with how detailed and thorough it is.

I am struggling to find the most appropriate SLOG drive for my particular setup, however. I have an X9SRL-F, which has no M.2 or U.2 ports. As such, my options are currently SATA-based Intel SSDs such as the S3700 https://www.ebay.co.uk/itm/Intel-DC...e=STRK:MEBIDX:IT&_trksid=p2060353.m1438.l2649 or looking for an M.2 form-factor drive on a PCIe add-in card.

I was wondering, has anyone come across any M.2 drives that support PLP (power-loss protection) and would be appropriate for use in this scenario? Perhaps I am wasting my time considering this as an option, but passing through my entire onboard SATA controller just for a single SATA device for FreeNAS seems almost wasteful.

I assume there is also no real issue putting ESXi and the FreeNAS VMDKs on a standard SATA SSD (again, no M.2, so I can't use the fancy 960 Evo like you).

Cheers
Eds
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Intel now has M.2 drives that support PLP, i.e. the P4501, but they are the 22110 form factor. They should fit in adapter cards.

Asus also has a quad M.2 adapter card.

The Optane 900p would make a good SLOG, but last I checked there were compatibility issues with ESXi pass-through to FreeNAS.

STH recently reviewed SLOG options, but note that they have not tested the new 4th-gen SSDs (i.e. the P4xxx series):

https://www.servethehome.com/exploring-best-zfs-zil-slog-ssd-intel-optane-nand/
 
Last edited:

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
Very interesting reading!

While the newer Optane 900p and P4000-series drives are way out of my price range for the intended use, the Intel and Samsung NVMe SSDs do look like a very good proposition, sitting in the middle of the pack. I can pick up a PM953 for about £150, and it should have enough performance for my relatively small-scale deployment.

The Intel 32GB M.2 Optane devices are also interesting.
I'm curious now: if I had a 4-port M.2 to PCIe adapter, can each drive be passed through to FreeNAS separately, or does the entire adapter pass through as one device?

Also, is it advisable to have one SLOG device per pool, or could you just use one device with separate SLOG partitions?
I'm thinking it might be worth getting a PCIe adapter and a couple of M.2 Intel Optane modules for the lower-performance pools, then also a 960 Evo for ESXi and a PM953 for the high-performance pool.
I'm aware I might need to split across adapters depending on available PCIe lanes.

Does that make sense?

Eds
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
In order to use those quad cards you need a motherboard which supports PCIe bifurcation (the X10SDV I use does). If the slot is bifurcated, then it appears as 4 separate devices, and I believe each can be passed through separately.

If your motherboard does not support bifurcation, then you need a much more expensive card with PCIe switches on it.

You need to be careful about your order of operations when messing with pass-through, because if a device goes away, then it can change the device ordering, and ESXi can get confused.

You can use one device with separate SLOG partitions. It basically comes down to a performance compromise, as a SLOG device can only support so much IO; but as long as you understand that, and especially if it's unlikely for all pools to be slammed at the same time, it should be fine.

So, since you're talking about lower performance being satisfactory, then I think you should be fine.

I checked the busyness of my SLOG device when playing with this stuff, and it never went above 75%, which meant it had spare resources for use as an L2ARC.
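If you want to check yours, gstat in the FreeNAS shell shows a %busy column per device; something like this, assuming the SLOG shows up as nvd0:

Code:
# live per-device stats; the %busy column shows how loaded the SLOG is
gstat -f nvd0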

Re: the PM953, the issue is that you can't update the firmware of an OEM-only drive. https://forums.servethehome.com/index.php?threads/pm863-fw-file.18084/

The little 32GB Optane might actually be a good option. Its performance is not stellar, but then again, it does beat all the SATA options AFAIK.
 
Last edited:

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
Is bifurcation likely to be listed as a feature under any other terminology? I'm not really sure this is something that will be listed on a product page, so I might need to scour the manual.
I guess the easiest thing to do is drop a line to Supermicro for confirmation. It sounds like some people have managed to add support by customising their BIOSes, so fingers crossed my X9 is OK hardware-wise and support is available in a Supermicro BIOS update.

I'm unlikely to move devices around once it's all up and running, so my only concern there would be if a device physically dies. Definitely something to keep in mind during reboots and such though.

Yeah, my VM pool usage will be low, as I'll only have a few VMs, and the other pools are basically for serving/saving media files and don't get updated that often. One SLOG might do the trick. Worst case, I have room for multiple M.2 to PCIe adaptors, so I can always add a second SLOG if performance becomes a problem.

I assume that when you mention the firmware update on the PM953 and similar drives, you aren't saying a firmware update is required for it to work with FreeNAS, but are just making me aware that it might not be possible to update it in the future?

Lastly, will any single port M.2 to PCIe adaptor work? Are they all much of a muchness when talking about single port cards, and could I just go for the cheapest available option?

Thanks for all your advice!
Eds
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Lastly, will any single port M.2 to PCIe adaptor work? Are they all much of a muchness when talking about single port cards, and could I just go for the cheapest available option?
In principle. However, some have been known to be dodgy.
 

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
It actually sounds like the X9SRL-F supports bifurcation, so I'm wondering if anyone has had any experience using the Asus (https://www.scan.co.uk/products/asu...0-slots-intel-vroc-support-for-asus-x299-moth) 4-port M.2 to PCIe adaptor?

It's not super expensive, so I might give it a go and stick in a 960 Pro for ESXi boot and FreeNAS VM storage (maybe also L2ARC and swap, as Stux has done), and then a PM953 for SLOG.

EDIT: Well, it looks like it might not be any good, presumably because of VROC: https://forums.servethehome.com/index.php?threads/nvme-boot-with-supermicro-x9da7.13245/#post-173369
This guy has tried one, and even with bifurcation enabled he was only able to detect the device installed in slot 1.

It also looks like I would have to mod my X9SRL-F BIOS to add NVMe boot support for ESXi on a 960 Pro: https://www.win-raid.com/t871f50-Gu...rt-for-all-Systems-with-an-AMI-UEFI-BIOS.html
Alternatively, I could boot ESXi from a USB drive and just create a datastore on the 960 (ESXi has native NVMe support(?)).
 
Last edited: