Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Re: L2ARC and swap on the Samsung. Works well. But when I have a high-end PCIe NVMe drive with PLP, I use that instead :)

I'm interested in people's results with the 4x M.2 cards, as long term I plan to replace the P3700 with one, and then use an M.2 SLOG on that, along with some other drives... the P3700 will probably migrate to my full-size system.

ESXi booting from a USB drive is probably a good way to go. NVMe *booting* is what requires BIOS support, not NVMe per se, and ESXi is specifically designed for USB booting. It makes very few changes to the USB drive, so it won't have the same longevity problem that FreeNAS has. I can't boot my X10SDV from USB *and* pass the USB ports through to a VM, as there is only one USB chain and very few USB ports.

Also, there are best practices out in ESXi land about building your ESXi USB boot drives from a script, so that you can rebuild them easily, etc. I've never done this.

This is a good website: https://www.virtuallyghetto.com

VMware-kickstarting: https://www.virtuallyghetto.com/vmware-kickstart
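For the sake of illustration, a minimal scripted ESXi install (kickstart) file could look something like the sketch below. The directives are standard ESXi kickstart ones, but the password and NIC name are placeholders you would change for your own environment.
Code:
vmaccepteula
# install to the first USB device found, wiping any existing VMFS on it
install --firstdisk=usb --overwritevmfs
# placeholder root password -- set your own
rootpw MyTempPassword1!
# simple DHCP networking on the first NIC (adjust the device name as needed)
network --bootproto=dhcp --device=vmnic0
reboot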
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358

ASRock has a card too

https://www.asrock.com/mb/spec/product.asp?Model=ULTRA QUAD M.2 CARD

Might be worth searching ServeTheHome, AnandTech, and SSD Review for reports about them

An interesting feature of the ASRock version:

[Screenshot attachment: Screen Shot 2018-02-22 at 5.20.20 pm.png]


It could be that the reason some people don't see all their drives is tolerance issues in the traces...
Mind you, I don't think I see retimer chips there. Retimer chips are normally recommended for bifurcated slots.

SuperMicro makes a 2x M.2 adapter. You could check with them if that's supported on the X9. Pretty damn safe approach to take.
 

SlinkingAnt

Cadet
Joined
Oct 20, 2017
Messages
1
Then we can do initial config of the VM. You should select at least 2 vCPUs, but I'm going to use 8, because I want FreeNAS to be performant, and that is the maximum allowed in free ESXi.
Although this might seem like the best option to get the most performance out of it, I would not advise it, especially if you assign more vCPUs than you have available in total. One of the main benefits of virtualization is the possibility to assign more vCPUs to your VMs than are physically available. So far, so good.

When you have assigned more vCPUs than you have threads available and your FreeNAS VM wants to do something, it will wait until 8 threads are free for use. If you have more VMs running that are using CPU time, the chances are that the CPU wait time of your FreeNAS VM is higher with 8 vCPUs than if you decrease it to 4 vCPUs, simply because it will happen much more often that 4 threads are available.

The main metric to look at in the ESXi web interface is the CPU ready time; it should be fairly low. You can find it on the Monitor tab per VM. In a few words, it's the time that the VM has to spend waiting for free CPU cycles before it can execute its instructions. Even if it only needs 1 thread to execute, it still needs to wait for 8 threads to be available.
On my quad-core Atom at home it hovers around 0.5% on each VM (3 in total, 2 vCPUs each, so some over-provisioning), but if I push one of the VMs, I immediately notice performance degradation on the other VMs. At work we have a SQL machine with 8 vCPUs on a 24-thread host, where it's around 0.05% while still running 22 other VMs with multiple vCPUs.

I would definitely look at what the actual CPU load is when you do some load tests on your FreeNAS VM, and test it both with and without other VMs running. If you also use vCPU over-provisioning, you might get better performance by reducing the number of vCPUs assigned to your VMs.
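If you'd rather watch this from the ESXi shell than the web UI, something like the sketch below should do it (the sample interval and count are arbitrary):
Code:
# Log esxtop in batch mode: one sample every 5 seconds, 12 samples in total.
# The per-VM CPU ready figures (shown as %RDY in interactive esxtop) end up
# in the "% Ready" columns of the CSV.
esxtop -b -d 5 -n 12 > cpu-ready.csv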

The rest of your guide is really good; following your steps, almost anybody can configure a FreeNAS VM on ESXi. I'm looking at buying one of the high-end Denverton-based boards from Supermicro and doing exactly the same, but with an LSI SAS controller and 8x 4TB drives, 2x M.2 drives for VMs/caching/SLOG, and a single SSD for non-important/low-IO VMs, plus 4x 10Gbit onboard to be future-ready O:). So far the price is the only thing holding me back, especially with the crazy DDR4 prices.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Although this might seem like the best option to get the most performance out of it, I would not advise it, especially if you assign more vCPUs than you have available in total. One of the main benefits of virtualization is the possibility to assign more vCPUs to your VMs than are physically available. So far, so good.

When you have assigned more vCPUs than you have threads available and your FreeNAS VM wants to do something, it will wait until 8 threads are free for use. If you have more VMs running that are using CPU time, the chances are that the CPU wait time of your FreeNAS VM is higher with 8 vCPUs than if you decrease it to 4 vCPUs, simply because it will happen much more often that 4 threads are available.

The main metric to look at in the ESXi web interface is the CPU ready time; it should be fairly low. You can find it on the Monitor tab per VM. In a few words, it's the time that the VM has to spend waiting for free CPU cycles before it can execute its instructions. Even if it only needs 1 thread to execute, it still needs to wait for 8 threads to be available.
On my quad-core Atom at home it hovers around 0.5% on each VM (3 in total, 2 vCPUs each, so some over-provisioning), but if I push one of the VMs, I immediately notice performance degradation on the other VMs. At work we have a SQL machine with 8 vCPUs on a 24-thread host, where it's around 0.05% while still running 22 other VMs with multiple vCPUs.

I would definitely look at what the actual CPU load is when you do some load tests on your FreeNAS VM, and test it both with and without other VMs running. If you also use vCPU over-provisioning, you might get better performance by reducing the number of vCPUs assigned to your VMs.

The rest of your guide is really good; following your steps, almost anybody can configure a FreeNAS VM on ESXi. I'm looking at buying one of the high-end Denverton-based boards from Supermicro and doing exactly the same, but with an LSI SAS controller and 8x 4TB drives, 2x M.2 drives for VMs/caching/SLOG, and a single SSD for non-important/low-IO VMs, plus 4x 10Gbit onboard to be future-ready O:). So far the price is the only thing holding me back, especially with the crazy DDR4 prices.

My system has 8 cores and 16 threads. vWait is normally 0.05%.
 

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
I have a query outstanding with Asus about their card working with bifurcation, so will report back on what they advise.

The ASRock one also looks like another option worth trying. I love the idea of getting 4 drives on one PCIe slot, although for me on an X9SRL-F it can't be fully utilised for now, due to how the PCIe lanes are spread across the slots. It's an investment for when I get a newer/nicer board, though.

Just had a seller of a PM953 advise me they can't complete my order as they "ran out of drives". They can offer me an "SM953" (which I don't think exists; I think they mean the 951?). Does anybody know if this would be OK as a SLOG? My understanding is that SM vs PM determines the flash type (TLC, MLC, etc.), so it would use the same controller and still have PLP?
 

loch_nas

Explorer
Joined
Jun 13, 2015
Messages
79
I don't know what those letters stand for, but actually the PM953 is U.2 and the SM953 is M.2.
And the SM953 does exist.
 

loch_nas

Explorer
Joined
Jun 13, 2015
Messages
79
Oh well, what does "sure" mean? Maybe both M.2 and U.2 versions exist for every model, I don't know. But that the PM953 is at least available as U.2 and the SM953 as M.2 is something I was quite sure of up to now :D
https://s3.ap-northeast-2.amazonaws.com/global.semi.static/PM953_flyer_web-1.pdf
https://s3.ap-northeast-2.amazonaws.com/global.semi.static/SM953_Whitepaper-0.pdf

And let's say I trust Samsung's whitepapers more than product descriptions on eBay. But you know, I can't be 100% sure, because I'm not a reseller and I don't work for Samsung.

After a quick search I've found out that the PM953 is available in two form factors:
2.5" (U.2) and M.2

So there's no reason to think that the product description on eBay is wrong ;)
 

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
That's OK then.
I've got an SM953 on its way now, but it's good to know the PM version will be OK as a replacement if needed in the future.

Eds
 

saeed12345

Cadet
Joined
Feb 18, 2018
Messages
4
OK, one question...
Is there any way to manage ESXi remotely?
Or is the only way to set up a VPN on a VM and do it like that?
 

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
It could be that the reason some people don't see all their drives is tolerance issues in the traces...
Mind you, I don't think I see retimer chips there. Retimer chips are normally recommended for bifurcated slots.

SuperMicro makes a 2x M.2 adapter. You could check with them if that's supported on the X9. Pretty damn safe approach to take.

Finally heard back from Asus, who advised that their 4-port card does not support bifurcation:

Hello James,

Thank you for the update, but unfortunately this card does not support PCIe Port Bifurcation.

Eds
 

hackbeard

Cadet
Joined
Nov 11, 2017
Messages
2
On page 3 of this thread, after creating the bare-metal USB stick, there is one post marked "reserved" where I guess the creation of the data pools was going to be described - maybe even more.
Can you fill that in?
Something is not working the way it should in my build, for example increasing the zvol size. Not sure what I missed there...
Thanks in advance!
 

I4C6XFQM

Cadet
Joined
Aug 7, 2017
Messages
6
Just gonna pop in and say thank you for this amazing build log and guide. I basically mirrored this build (unintentionally at first) and it's been running fantastically over the past 6 months. I ended up buying a 400GB Intel DC P3700, a 500GB Samsung 960 EVO M.2 and a CyberPower CP1500EPFCLCD. I'm passing the UPS through to FreeNAS over USB and using Spearfoot's AIO utility scripts to manage the startup and shutdown of all the VMs, which works wonderfully.

I did run into some problems with the fan control script. Sometimes the fans spiked to full blast for a couple of seconds. After setting debug to 4 I found the entries below in the logs.
Code:
Unable to obtain SDR reservation
Unable to open SDR for reading
Sensor data record "CPU Temp" not found!
2018-01-14 19:12:13: Unexpected CPU Temp ().

> Error: no response from RAKP 1 message
> Error: no response from RAKP 3 message


It seems that in my case ipmitool sometimes doesn't get a response right away. Weirdly enough, manually running the command when the issue occurs returns the values pretty much immediately.
Increasing ipmitool's timeout with '-N 3' made it a lot more stable.
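For anyone hitting the same thing, the change is roughly the one below, sketched as a standalone command; the BMC address, credentials and sensor name are placeholders, and the fan script assembles its own ipmitool call.
Code:
# Allow 3 seconds for a response and retry twice, rather than relying on
# ipmitool's defaults.
ipmitool -I lanplus -H 192.168.1.100 -U ADMIN -P ADMIN -N 3 -R 2 sensor get "CPU Temp"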
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Just wanted to add my thanks. Reading through this guide again, there is a lot of quality work here. Anyone looking to implement FreeNAS, especially as a VM, should read this. Thank you.
 

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
Stux, if you have a datastore connected via iSCSI on the AIO box, what happens when you reboot the physical device?

What I've found is that if I set the FreeNAS VM to auto-start first, it takes too long to fully boot, so any VM on a datastore connected via iSCSI can't start; it shows as invalid because the datastore hasn't mounted yet.

Have you found the same or dealt with this in any way?
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Stux, if you have a datastore connected via iSCSI on the AIO box, what happens when you reboot the physical device?

What I've found is that if I set the FreeNAS VM to auto-start first, it takes too long to fully boot, so any VM on a datastore connected via iSCSI can't start; it shows as invalid because the datastore hasn't mounted yet.

Have you found the same or dealt with this in any way?

I think he or someone else wrote a script to delay the start / control the flow of the VMs that depend on an iSCSI datastore.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
Yes. You need to use a script to refresh the iSCSI datastore and then start the dependent VMs. The script is triggered from inside the FreeNAS VM, and essentially uses SSH to the hypervisor.
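As a rough, untested sketch of the idea (the ESXi address and VM ID are placeholders, and it assumes key-based SSH from the FreeNAS VM to the host as root):
Code:
#!/bin/sh
ESXI_HOST="192.168.1.10"   # hypothetical management IP of the ESXi host

# Rescan the storage adapters so ESXi notices the iSCSI target that FreeNAS
# has just brought up, then refresh the VMFS volumes.
ssh root@${ESXI_HOST} "esxcli storage core adapter rescan --all"
ssh root@${ESXI_HOST} "vmkfstools -V"

# Power on a VM that lives on the iSCSI datastore.
# Look up the numeric IDs with: vim-cmd vmsvc/getallvms
ssh root@${ESXI_HOST} "vim-cmd vmsvc/power.on 12"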

Same sort of idea for shutting down the VMs.

The alternative, of course, is to store your VMs on the M.2 drive that ESXi boots from.
 

Eds89

Contributor
Joined
Sep 16, 2017
Messages
122
Ah, OK, that makes sense. Any example scripts out there to work off of?
If not, it seems like something I can probably piece together from other snippets.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Yes. You need to use a script to refresh the iSCSI datastore and then start the dependent VMs. The script is triggered from inside the FreeNAS VM, and essentially uses SSH to the hypervisor.

This may be a question showing my lack of Linux skills, but where do all these scripts run from? Do the underlying OSes (however slim or cut down) of FreeNAS and ESXi accept outside scripts, and how? I believe I have an inkling of an idea, as I set up a game server on CentOS and had to go through a similar process. My apologies if this is too much to answer. I knew I was going to need to get back into coding, but I had hoped it could be after the lab, website transfer, CCENT, Linux+, and so on.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
This may be a question showing my lack of Linux skills, but where do all these scripts run from?
FreeBSD is not Linux; it's descended from the original Unix. Linux is Unix-like.

Do the underlying OSes (however slim or cut down) of FreeNAS
Phew, can of worms.

In Unix (and even Windows), besides running binary executable files or variations on that theme (like Java, which runs on the Java Virtual Machine, or Python, which is interpreted), you can have scripts, which start by stating who is supposed to interpret them, and the answer is almost always a shell (be it sh, bash, or whatever). Shell scripts revolve around using commands just as you would when interacting directly with the shell, with some additional constructs that let you build more than simple sequences of commands.
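A tiny illustration (the file name and command are arbitrary):
Code:
#!/bin/sh
# The first line (the "shebang") names the interpreter for this file.
echo "Refreshing iSCSI datastores..."

Make it executable with chmod +x myscript.sh and run it with ./myscript.sh.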

where do all these scripts run from
From the filesystem, like anything else. They have to live somewhere, and the filesystem is the only place that exists. In practice, that means they're going to be somewhere on your pool.

https://en.wikipedia.org/wiki/Shell_script
 