Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

hoitzed

Cadet
Joined
Jul 19, 2018
Messages
2
Followed the guide for a brand new build using FreeNAS 11.4 and ESXi 6.7 with similar hardware. I really appreciate the hard work put into this guide. Confirming everything works, but a couple of things were different for me. I didn't need to edit the passthru.map for the Intel Lynx AHCI controller; the GUI worked fine from the initial install. Also, changing the swap in FreeNAS 11 doesn't seem to work anymore.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Also, changing the swap in FreeNAS 11 doesn't seem to work anymore.
Yes, they changed the whole way that system swap is created (it is now created on boot and destroyed on shutdown) since this build was done, and part of the reason for moving the swap was eliminated when they made that change.
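If you want to see what FreeNAS actually created, a quick check from the shell shows it (a sketch; on 11.x the boot-created swap typically shows up as encrypted mirrors like mirror/swap0.eli, but device names will vary):
Code:
# list the active swap devices that FreeNAS set up at boot
swapinfo -h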
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
I didn't need to edit the passthru.map for the Intel Lynx AHCI

Wow. I find that surprising :)

Does that mean that ESXi is officially recognising the built in SATA controller for pass-through now?
 

hoitzed

Cadet
Joined
Jul 19, 2018
Messages
2
Wow. I find that surprising :)

Does that mean that ESXi is officially recognising the built in SATA controller for pass-through now?

I believe so. The 10GbE ports are also natively recognized in ESXi 6.7; no additional drivers required.
For reference, mine is a Supermicro X10SDV-4C-TLN2F
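If anyone wants to double-check this on their own box, something like the following from the ESXi shell should do it (just a sketch; output formatting differs a bit between 6.5 and 6.7):
Code:
# confirm the onboard 10GbE NICs were claimed by a native driver
esxcli network nic list
# check that the SATA controller shows up in the PCI device list
esxcli hardware pci list | grep -i ahci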
 

Sisyphe

Dabbler
Joined
Nov 8, 2017
Messages
11
I believe so. The 10GbE ports are also natively recognized in ESXi 6.7; no additional drivers required.
For reference, mine is a Supermicro X10SDV-4C-TLN2F

I double-checked and I can confirm: on my Supermicro X10SDV-TLN4F, Lynx AHCI controller pass-through, sensor display, and the 10GbE ports all work natively in ESXi 6.7.

However, my backup server uses an ASRock D1520D4I, and although its Lynx AHCI controller has the same device ID as the Supermicro's, pass-through is not working, even if I edit passthru.map.
 
Last edited:

beerandt

Cadet
Joined
Apr 9, 2018
Messages
5
The p900 would make a good slog, but last I checked there were compatibility issues with ESXi pass-through to FreeNAS

Bought a 900P almost a year ago, before finding out about the ESXi passthrough issues, and it drove me nuts trying to troubleshoot; the kernel error text on boot-up was useless. Took me a couple of days to even figure out it was the 900P causing it. (First Supermicro and FreeNAS and ESXi build for me. Thanks for the build report! It saved me days and days of googling.)

I gave up trying to get it to work, and was about to pull the trigger on one of the ESXi-supported Intel cards ($$$$$$$), but happened to re-check the bug ticket first.

TL;DR: there's a fix/workaround posted that worked for me:
- SSH to the ESXi host
- edit /etc/vmware/passthru.map
- add the following lines at the end of the file:
Code:
# Intel Optane 900P
8086 2700 d3d0 false

- restart the hypervisor

After that, I re-added the 900P to the FreeNAS VM, and it booted right up. (FreeNAS 11.1-U5, ESXi 6.5-U2)
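To sanity-check that the passthrough actually took, the drive should show up inside the FreeNAS VM (a sketch; the nvme0 name is what I'd expect on FreeBSD, not a given):
Code:
# inside the FreeNAS VM: list NVMe controllers and namespaces
nvmecontrol devlist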

The nature of the bug has me a bit nervous about relying on it, though. I'm worried this might just be a brute-force fix that doesn't address the underlying (I think timing-related?) problem. And I'm not really confident in how to test it, other than to use it and then check data integrity against a backup.

Hopefully someone around here smarter than me will know what to make of this progress.
 

usergiven

Dabbler
Joined
Jul 15, 2015
Messages
47
I was about 5 seconds away from purchasing a 900P until I heard they just don't work well with ESXi. Are the issues just around passing the 900P through to a VM (e.g. for a SLOG), or is it also unreliable as a datastore to hold VMs?
 

beerandt

Cadet
Joined
Apr 9, 2018
Messages
5
I was about 5 seconds away from purchasing a 900P until I heard they just don't work well with ESXi. Are the issues just around passing the 900P through to a VM (e.g. for a SLOG), or is it also unreliable as a datastore to hold VMs?

The issue is/was that the FreeNAS VM would crash/hang during install or boot, but only when the 900P was present in pass-through mode. In an Ubuntu VM in ESXi it seemed to work fine. On FreeNAS bare metal, it worked fine. (I didn't do much testing other than creating a quick test volume while troubleshooting, since bare metal wasn't a viable option for me long term.)

If 900P pass-through was disabled, the storage could be made available through an ESXi datastore, although I have no idea what the data-integrity or even performance implications are, whether for storage or as a SLOG. I assume (wild guess) you would lose some confidence in data integrity, since it's not managed directly by FreeNAS. Hopefully someone with more knowledge can chime in on this, as I'd like to know the answer as well.

On the other hand, if you just need to boot an OS image without permanent storage or changes (like a backed-up, non-persistent VM), then it might be fine. But again, wild guess on my part.

After the fix mentioned above, it seems to be working in pass-through (and by working I mean the FreeNAS VM will actually boot within ESXi, with the 900P in pass-through, and the drive is visible/usable in FreeNAS as it should be). I have no idea regarding its reliability; that's waaaay beyond my pay grade. Like I said, I really don't understand the nature of the bug, which makes me nervous. If someone has a suggestion for how to test this, I'd be happy to run it and share results. (At least until it gets put into production in a few weeks. Deployment can be delayed, but pulling it out post-deployment won't be an option.)

For now, I have it added as a SLOG just for testing purposes, but haven't had time to play around with it beyond that.
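For reference, this is roughly what attaching it and pushing some sync writes through it looks like from the FreeNAS shell (a sketch only; the pool name tank and device nvd0 are placeholders for whatever your system shows):
Code:
# attach the 900P (placeholder device nvd0) as a log vdev to pool "tank"
zpool add tank log nvd0
# force writes on a test dataset through the ZIL so the SLOG gets exercised
zfs create tank/synctest
zfs set sync=always tank/synctest
dd if=/dev/zero of=/mnt/tank/synctest/testfile bs=4k count=100000
# remove the log vdev afterwards if it was only a test
zpool remove tank nvd0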
 

AVSION

Contributor
Joined
Dec 28, 2016
Messages
128
Hi guys, thank you for this awesome thread; once I started I couldn't stop reading. Amazing information in the palm of your hand. When I did my first FN build I actually bumped into Stux's primary FN build and copied it :) with a small twist, and I'm enjoying the beast every day for media, learning, and business.

I would like to change my home setup from bare-metal FreeNAS to an AIO ESXi with FreeNAS as a VM. In addition to FreeNAS, I would like to install various VM distros in a performant way, so it is enjoyable to work with for my media centre, Plex, Kodi, Photoshop, video editing, and other heavy-duty applications (I don't do games). From reading through the thread, I will need to set up a SLOG and other caching for FN and the VMs.

For my existing hardware I use an X10SRi-F that has 7 PCIe slots (none used at the moment), a Xeon E5-2683 with 14 cores, 32GB RAM, and 32TB of HDD. What I'm thinking is to have ESXi, FN, the VMs, the SLOG, etc. sit on PCIe SSDs, and use the onboard AHCI controller, passed through via ESXi, for the FN pool.

My question: what is the proper way to do this in terms of performance? Use a separate PCIe SSD each for ESXi, the VMs, the SLOG, etc., or is it OK to use one PCIe SSD with more storage for everything? If separate SSDs are used, where should the SLOG and the caching for FreeNAS and the VMs be installed? Additionally, could you recommend some compatible PCIe options for the split or single setup that won't break the bank but still give good performance? I can already confirm that I can pass through the Intel Wellsburg AHCI controllers by adding/editing the passthru.map with ESXi 6.5 U2; 6.7 is not working on my system, and I have tried everything at this point.

Thank you in advance
 

AVSION

Contributor
Joined
Dec 28, 2016
Messages
128
Hi @Stux,

Thank you for your amazing thread, no words!! Following on from my previous post, I'm thinking of getting the Intel 900P 512GB for the ESXi boot drive and as a datastore for the VMs such as FN, macOS High Sierra, and more. With regards to the SLOG, swap, cache, and iocage jails, I have two spare SATA ports available (8 are used for the FN pool) that I can use for SSDs. Will that work? Any pros/cons? If I add two SSDs to the spare SATA ports, will they show up as available to FN? And lastly, if it will work, what SSDs do you recommend?

I've been testing with an old Dell PERC 6/i integrated SAS RAID controller and an Intel 120GB 540s SSD; I was able to install ESXi 6.5 U2 and the FN VM, and pass through the Intel Wellsburg AHCI controller with the whole pool available and ready to import into FN.

Thank you
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
A 900p sounds like overkill for the datastore. It really should be the SLOG. Using a SATA SSD for SLOG will work fine, but will give significantly lower performance than the 900p.

Probably the best approach would be to use a pass-through HBA card for the FreeNAS disks, pass through the 900p (with the workarounds) as SLOG, and then use the motherboard SATA ports for ESXi datastores.
 

AVSION

Contributor
Joined
Dec 28, 2016
Messages
128
Thanks @Stux, that sounds good. For the HBA, any particular brand/model you recommend? Regarding the 900p 280GB, if I understand correctly, it will be used for SLOG, swap, and L2ARC (leaving 180GB free), is that right? https://www.scorptec.com.au/product/Hard-Drives-&-SSDs/SSD-2.5-&-PCI-Express/70481-SSDPED1D280GASX

With the motherboard SATA, do you mean to use one separate SSD for the ESXi OS boot and the rest of the ports for the VM datastore? And what model of SSDs do you recommend? Thanks heaps

Edit - I did some research and can get the following from the local distributors:
1. HBA - Lenovo M1215 + 2x Mini-SAS to 4x SATA breakout cables
2. ESXi, FN, VMs and datastore - Samsung 970 Pro M.2 SSD (one or two) on a Supermicro PCIe carrier card AOC-SLG3-2M2 (that will give good performance for ESXi, FN, and especially the OS VMs)
3. Intel 900p for the SLOG, swap, and L2ARC

I have checked HW compatibility with FN and ESXi 6.5 and it is supported.

Let me know what you think?

Thank you,
 
Last edited:

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
You can put the SLOG, swap, and L2ARC on the 900p if you want. I did. Doesn't necessarily mean it's the right choice for you.

As long as the HBA is an LSI 2008 or similar chipset.
 

AVSION

Contributor
Joined
Dec 28, 2016
Messages
128
Hi Freenesxi's,

I got some of the FN/ESXi parts this week and have been upgrading the system; a great hands-on learning experience. I would like to share some information, especially regarding the Intel Optane SSD 900P PCIe, found while trying to install using @Stux's guide as a reference for the SLOG/L2ARC/swap setup.

The good news: after installing the 900P and booting into ESXi 6.7, I can confirm the Intel 900P shows as available to pass through to any VM out of the box, without the need to install any drivers or edit the passthrough map; just toggle it active from the hardware section. However, to pass it through to FreeNAS (11.2-BETA3) you have to edit the passthrough map (the workaround mentioned before), otherwise FreeNAS won't boot; it was stuck at the kernel. After updating the map, FreeNAS boots normally. :)

The bad news: I wanted to configure the 900P SSD to use 4K sectors and over-provisioning using the Intel SSD Data Center Tool, but it didn't find the 900P. After an hour of mucking around I checked the ISDCT information page and, surprise, the 900P is not listed among the devices compatible with this tool. Doing some more digging, I found a reply from Intel saying that, as the 900P is a consumer product, it doesn't support the 4K-sector and OP features; only the Data Center range supports them. However, in another reply from Intel regarding OP: quote, "over-provisioning is not strictly required in order to maintain high random performance and consistency. Please bear in mind that most drives come with over-provisioning by default, even though it is not visible to the end user in most cases." I will let the experts have their say. It's annoying in hindsight; had I known, I wouldn't have gone with the 900P. Challenge!! Maybe one of the hackers can find a way to use the ISDCT with the 900P; that would make it a very popular SLOG choice. Let me know ;)

So, with both of those steps off the plan, I went straight to installing the 900P as-is in FreeNAS and updating the passthrough map. Following the gpart steps, I first had to wipe the SSD and then enter all the commands; that went smoothly.
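For anyone following along, the sequence was roughly this (a sketch only; nvd0, the labels, and the partition sizes here are placeholders, check the guide for the real values):
Code:
# wipe and re-label the 900P (nvd0 is a placeholder; verify with nvmecontrol devlist)
gpart destroy -F nvd0
gpart create -s gpt nvd0
# SLOG, swap, and the rest as L2ARC; sizes are illustrative only
gpart add -t freebsd-zfs -a 4k -s 16G -l slog0 nvd0
gpart add -t freebsd-swap -a 4k -s 16G -l swap0 nvd0
gpart add -t freebsd-zfs -a 4k -l l2arc0 nvd0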

Finally, to test the speed, I installed a Windows VM and used the built-in initiator for iSCSI storage. For the tool I used CrystalDiskMark and got the following results; as you can see, the write speed is slow compared to the read speed. However, I didn't test while toggling sync between disabled/enabled, as I'm not sure how to do that in FN/Windows.
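From what I can tell, the sync toggle on the FN side would be done per dataset/zvol from the shell, something like this (tank/iscsi-test is a placeholder for whatever backs the extent):
Code:
zfs set sync=disabled tank/iscsi-test   # fast but unsafe: skips the ZIL entirely
zfs set sync=always tank/iscsi-test     # forces every write through the SLOG
zfs set sync=standard tank/iscsi-test   # back to the default when done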

I hope this will help you guys make the right decision when choosing a SLOG.

Screen Shot 2018-09-16 at 12.36.00 pm.png
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
I have a bit of a conundrum choosing my new hardware; I've just found this out.


My 7-year-old processor:
https://browser.geekbench.com/v4/cpu/search?dir=desc&q=AMD+N54L&sort=score
The processor I intend to replace it with:
https://browser.geekbench.com/v4/cpu/search?dir=desc&q=C3758&sort=score

Also this data (the same CPU line I'm after, but with fewer cores):
https://www.cpubenchmark.net/compare/AMD-Turion-II-Neo-N54L-Dual-Core-vs-Intel-Atom-C3558/477vs3129

Note the single-core performance. Not sure what to do; it's mildly faster at best.

Then the pricing is actually $10 less for the Xeon D:
https://www.newegg.com/Product/Product.aspx?Item=296-0002-002Y5&ignorebbr=1
https://www.newegg.com/Product/Product.aspx?Item=N82E16813182973&ignorebbr=1

I know it runs a bit hotter and eats 25W more, but it's certainly a predicament.

Anyone tried both? My main concern, besides heat with the Xeon, is the 6 SATA ports; I need all 6 for my disks. Do you lose a SATA port if you plug in an M.2?
Part of me wants the 8-core, very-low-power machine, but damn, those single-core benchmarks are kind of worse than I expected. FreeNAS isn't *that* multi-threaded, is it?
 