Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]

Joined
Dec 2, 2015
Messages
730
The X10SDV-TLN2F board also has 10GbE networking, which might be useful in the future.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
My main concern besides heat with the Xeon, is the 6 SATA ports. I need to use all 6 for my disks.

This is why I use an m2 boot drive for esxi

Do you lose a SATA port if you plug in an M.2?

Certainly not if it’s a PCIe NVME M.2

Part of me wants the 8-core, very low power machine, but damn, those single-core benchmarks are a bit worse than I expected. FreeNAS isn't *that* multi-threaded, is it?

Seems pretty multithreaded to me. SMB is not.

In the benchmarks I've seen, a Denverton Atom C3000 16-core compares favorably with a Xeon D-1500 8-core.

Of course, need to consider the Xeon D 2500 as well now.
 
Last edited:

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
This is why I use an m2 boot drive for esxi



Certainly not if it’s a PCIe NVME M.2



Seems pretty multithreaded to me. SMB is not.

In the benchmarks I've seen, a Denverton Atom C3000 16-core compares favorably with a Xeon D-1500 8-core.

Of course, need to consider the Xeon D 2500 as well now.


Thanks for the reply Stux.

I didn't know a PCIe NVMe M.2 negates the loss of the SATA port - that's very cool, and I guess it makes sense (I'd still _much_ prefer USB drives for cost, redundancy (easy to add 2), etc. - plus I've personally had zero problems with them in my rig).
I'll be looking at the Denverton C3000 8-core, the C3758. It seems similar to the Xeon D-1521 in the same price range. It's a difficult decision, but I think I might just stick with the 25W part; after all, my server cupboard... well, it's a long, complicated story, but it hits 33°C (91.4°F) INSIDE my house in summer.

The Xeon D 2xxx series, IIRC, are all higher power again.

So the last thing I guess I need to know is which RAM to get.

EDIT: One other thing - I googled it, and it sounded like SMB will eventually get multi-threading, right?
 
Last edited:

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Sorry to leverage your thread, but I need someone smart to outright confirm for me whether these RAM sticks (2-pack) would definitely work in the Supermicro Denverton boards.
I am not Stux, but it looks pretty clear to me. The board spec says it supports "Up to 256GB Registered ECC DDR4-2400MHz" and the memory is "DDR4 2400 (PC4 19200) 2Rx8 288-Pin 1.2V ECC Registered RDIMM compatible Memory".
DDR4, Registered, ECC, 2400; all boxes ticked.
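(For peace of mind once the board and DIMMs arrive, one quick check is to ask the firmware what it actually detected. A sketch only - dmidecode ships with FreeNAS and is available on most Linux live images:)

# list the installed DIMMs and confirm size, type, speed and ECC as detected by the firmware
dmidecode --type memory | grep -E 'Size|Type|Speed|Error Correction'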
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
I am not Stux, but it looks pretty clear to me. The board spec says it supports "Up to 256GB Registered ECC DDR4-2400MHz" and the memory is "DDR4 2400 (PC4 19200) 2Rx8 288-Pin 1.2V ECC Registered RDIMM compatible Memory".
DDR4, Registered, ECC, 2400; all boxes ticked.

Thanks Chris, I only stress because memory compatibility can be a real mess at times, especially ECC, buffered, registered, unbuffered, etc.

I'll take the gamble in the next hour unless someone else says wait (!)
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
it looks pretty clear to me.
On the other hand, some Supermicro X10 boards are mentioned in our forums as RAM-picky... Having said that, this board is not an X10, so I guess I am no help here. At least I am not surprised @diskdiddler is looking for a sanity check...

Other wild guesses, probably a bit too late...:
  • Does the store have a friendly return policy?
  • They claim the product is Supermicro-compatible, so maybe they accept compatibility claims, or can at least list tested motherboards?

Sent from my mobile phone
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
On the other hand, some Supermicro X10 boards are mentioned in our forums as RAM-picky... Having said that, this board is not an X10, so I guess I am no help here. At least I am not surprised @diskdiddler is looking for a sanity check...

Other wild guesses, probably a bit too late...:
  • Does the store have a friendly return policy?
  • They claim the product is Supermicro-compatible, so maybe they accept compatibility claims, or can at least list tested motherboards?

Sent from my mobile phone

I'll be ordering it to a house in California that I'll be at in a day, but I'll have no real way of testing it for 2 weeks until I return to Aus.

Even then I probably need another week or two before I can consider firing the memory up with the board.

Memory can be a picky thing unfortunately.
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
This is why I use an m2 boot drive for esxi



Certainly not if it’s a PCIe NVME M.2



Seems pretty multithreaded to me. SMB is not.

In the benchmarks I've seen, a Denverton Atom C3000 16-core compares favorably with a Xeon D-1500 8-core.

Of course, need to consider the Xeon D 2500 as well now.
Hi @Stux, I am wondering if you have any IOPS numbers for your pool? I am also using a P3700 as my SLOG, and my sync write IOPS were lower than I expected (10K IOPS at 4K writes). Now I don't know if there is something wrong with my setup or if this is simply how things should be. If you could run iozone -a -s 512M -O in a sync=always dataset and post the output here, that would be really helpful for me. Thanks.
https://forums.freenas.org/index.php?threads/performance-of-sync-writes.70470/#post-486742
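(For anyone wanting to run the same comparison, this is roughly what it looks like from the FreeNAS shell - a sketch, with "tank" and "synctest" as placeholder pool/dataset names; iozone is bundled with FreeNAS:)

# create a throwaway dataset that forces synchronous writes
zfs create -o sync=always tank/synctest
# run iozone's automatic mode on a 512MB file, reporting results in operations per second
cd /mnt/tank/synctest
iozone -a -s 512M -O
# clean up afterwards
cd / && zfs destroy tank/synctest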
 
Last edited:

GrendelJapan

Cadet
Joined
Feb 17, 2015
Messages
2
Awesome build! I'm at 87% capacity on my current box (ancient Atom w/ 4GB of RAM) and picked up a Node 304 maybe a year ago during a crazy sale, assuming I'd use it for my next box. Now that I'm further along, it seems like the case is pushing me towards more expensive components compared to the options for a larger box (although now that I think about it, an X10 is going to cost $200+, whereas something like a C236 WSI is about the same, so maybe it's more of a feature trade-off).
 

SMnasMAN

Contributor
Joined
Dec 2, 2018
Messages
177
Two SLOGs might saturate its capacity. But better than nothing. And only if it happens together.

So give it a try and monitor it. If you see sustained 90-100% busy then you’re probably saturating the slog.


@Stux - I thought that you could NOT use a single device as SLOG on more than 1x datastore? (i.e. you can't use 1x P3700 as SLOG on 2x different datastores).

Am I wrong? (And if I'm wrong, do you just make 2x partitions on the P3700, and then add the SLOG to the datastores via the CLI rather than the GUI?)
thanks!

EDIT - I can answer my own question here: yes, it is possible to use partitions of a SLOG device and thus spread a single Intel Optane across 2 or more datastores - although it is highly NOT recommended!
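(For reference, the CLI side of that looks roughly like the sketch below - emphatically not a recommendation, just an illustration. Device, pool and size values are placeholders, and this assumes the P3700 shows up as nvd0 in FreeNAS:)

# carve the NVMe device into two log partitions (sizes are arbitrary examples)
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G nvd0
gpart add -t freebsd-zfs -s 16G nvd0
# attach one partition to each pool as its SLOG
zpool add pool1 log nvd0p1
zpool add pool2 log nvd0p2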
 
Last edited:

Zervun

Cadet
Joined
Mar 12, 2019
Messages
5
Hey everyone - I am a little confused about my networking setup and I was hoping someone could help. I have tried following the node setup as much as possible - thanks Stux!

I am by no means an ESXi networking expert, but I worked as a network/security engineer for many years.

  • I have basically a flat internal network at home, all on 10.0.2.0/24, except for my DMZ where I have 4 static IPs I haven't used yet
  • Server is a Supermicro X9DRi-LN4F+
  • I'm using a Mellanox dual-port 40GbE card into my backbone (Brocade 6650), with only one interface plugged in currently
  • I have a pair of M.2 960 EVO Plus 256GB drives which host ESXi/FreeNAS (boot mirrored to the other one)
  • I have an LSI HBA in passthrough for all other drives in my Supermicro 846 chassis
  • I have an Intel 900p (might be replacing it with a P3700 soon for power-loss protection) for SLOG
  • ESXi is on 10.0.2.50
  • The storage network (MTU 9000) is configured as 10.0.3.1 within ESXi
  • FreeNAS is on 10.0.2.60 (vmx0)
  • The FreeNAS storage network interface, which will serve NFS for VMs, is on 10.0.3.2 (vmx1)

I have 6x 12TB Seagates in a pool of mirrored pairs that I would like to serve up over SMB on the 10.0.2.0/24 network.

For fast VM storage I have 4x 1TB Crucial SSDs in a pool of mirrored pairs (also off the HBA) that I plan on serving over NFS on the 10.0.3.0/24 network. It is plenty of storage space for what I need on the VMs.
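(For reference, a "pool of mirrored pairs" here means striped 2-way mirrors; built from the CLI instead of the FreeNAS GUI it would look roughly like this - a sketch with placeholder da* device names:)

# six disks as three striped 2-way mirrors
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5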

I am looking at serving up VMs both on the 10.0.2.0/24 network for internal VMs, and also some VMs in a DMZ (I have not chosen a subnet yet, but let's say 10.0.5.0/24, which will be mapped to a static IP on the outside coming from my Untangle).

Where I am a bit lost is how to segregate the DMZ VMs (10.0.5.0/24) from my internal network as much as possible to prevent internal compromise.

My thought was to wire up the second interface on the Mellanox directly to my firewall (Untangle), with an IP address within ESXi on its own virtual switch for the DMZ.

VM internal network 10.0.2.0/24 (Physical adapter 1)
VM DMZ network 10.0.5.0/24 (Physical adapter 2)
VM Storage network mtu 9000 10.0.3.0/24

Am I correct in the assumption that any VM spun up in 10.0.5.0 would also have access to the 10.0.3.0 storage network, since that is where the NFS datastore is? In the OS X example in this thread, OS X only has a network adapter on the VM network. Is it getting access to the backend storage through the virtual storage switch within ESXi, without the storage network adapter being defined in the VM?

I am confused about how the routing works internally in ESXi with the storage network and the NFS datastore on the virtual switches. What happens if my 10.0.5.0 server is compromised - can it route to the 10.0.3.0 storage network?

Sorry if I made this confusing, any help is appreciated. Perhaps the AIO isn't ideal for carving out a DMZ subnet.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Your ESXi does zero routing. It's not a router. It's a hypervisor. It does have a "default route" for its own internal operations, but this doesn't affect VMs.

The term "virtual switch" is a highly accurate description of what a vSwitch is, with a few exceptions.

What you set up for the ESXi hypervisor does not have to have anything to do with any other bit of networking, but it can also be tightly intertwined if you configure it to be. But it'll just appear to be a host like any other on your network.

---

You haven't explained what sort of topology you've used for your switching environment, except to say "flat internal network", which implies that maybe you're just putting everything in the same broadcast domain and hoping that "DMZ" is some magic thing that will fix your security issues. Let me be clear - don't do this. If you're going to do that, make it a truly flat network, skip the DMZs, and use a single network. (Don't do this either - it's insecure - but at least it is honest about what's going on.)

Each IP network you have needs to be a separate broadcast domain. Many networking beginners will overlay multiple IP networks on a single broadcast domain and think that this is buying them some separation. It isn't, at least not to an intruder. I run across networking "gurus" all the time who are convinced that they have good reasons to overlay networks or break the broadcast domain in various ways. Usually this just leads to misery of some sort in the future.

You can create separate broadcast domains by having multiple switches. This is the traditional but somewhat expensive way to do this. It has the benefit of being easy to understand. I'm not telling you to do this, but I want you to start out with that idea in your head to help make the next bit make more sense.

On a single switch, there is the concept of "VLANs", which many people are terrified of. Don't be. VLANs are virtual LANs. They are the networking equivalent of virtual machines. You can create multiple virtual switched networks within your physical switch. Your ESXi vSwitch also supports this. And by "supports" I really mean "was designed primarily for this".

So if you want to have a storage network and a DMZ network and an internal network and an external (upstream) network, you can easily have 4 VLANs. Configure your switch to present four standard ports ("access mode") on VLANs 10, 11, 12, and 13, and a trunk port with all 4 VLANs going to your ESXi. Configure your ESXi vmnic0 to be connected to vSwitch0 (which it should be by default). Then configure vSwitch0 to have a port group on each of VLANs 10, 11, 12, and 13. You can add VMs to any of these VLANs and they will be totally isolated from the other VLANs. You can add a VMkernel port group that allows ESXi to access those networks for things like the ESXi NFS client. You can then have an Untangle VM or physical host to handle routing. You attach the networks you need to the systems where you need them.
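(For anyone following along, the ESXi half of that paragraph can be done in the vSphere client or from the ESXi shell. A rough sketch, using the example VLAN IDs above and made-up port group names:)

# create a port group per VLAN on the default vSwitch0, then tag it
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Internal
esxcli network vswitch standard portgroup set --portgroup-name=Internal --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=DMZ
esxcli network vswitch standard portgroup set --portgroup-name=DMZ --vlan-id=11
# repeat for the storage (12) and upstream (13) VLANs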

This is how networks should be designed. You can have dozens or hundreds of VLANs. There are some limits... if you're doing a virtual router of some sort, you are limited to 10 network cards per VM, which may be a practical limit on the complexity of your network. You don't want to do a trunk interface to a VM because it can cause some serious performance issues, though of course you can use physical hardware with a trunk interface to configure dozens or hundreds of networks.

Source: professional network engineer, been doin' this for decades...
 

Zervun

Cadet
Joined
Mar 12, 2019
Messages
5
Your ESXi does zero routing. It's not a router. It's a hypervisor. It does have a "default route" for its own internal operations, but this doesn't affect VMs.

------

So if you want to have a storage network and a DMZ network and an internal network and an external (upstream) network, you can easily have 4 VLANs. Configure your switch to present four standard ports ("access mode") on VLANs 10, 11, 12, and 13, and a trunk port with all 4 VLANs going to your ESXi. Configure your ESXi vmnic0 to be connected to vSwitch0 (which it should be by default). Then configure vSwitch0 to have a port group on each of VLANs 10, 11, 12, and 13. You can add VMs to any of these VLANs and they will be totally isolated from the other VLANs. You can add a VMkernel port group that allows ESXi to access those networks for things like the ESXi NFS client. You can then have an Untangle VM or physical host to handle routing. You attach the networks you need to the systems where you need them.

Thanks, this answered my questions. I wasn't sure if there was some internal routing that ESXi was doing on its own. I haven't dealt with the internal ESXi networking configuration side (virtual switches/port groups), but I have been in network/security for a long time.

My current network is, for all intents and purposes, flat (a work in progress in the homelab) because I have not yet built out my internet-facing presence (no inbound currently), which I will VLAN off into a separate security zone, as well as carving things up a bit with more VLANs - it's on my to-do list. I have a Brocade 6610 and a 6650 as well as a physical Untangle firewall. All routing between networks is done through the Untangle. Currently I am just using one internal VLAN for servers, and it is not exposed inbound from the internet. I have a separate VLAN for my Ubiquiti wireless but left that out as it doesn't really come into play in this situation.

What I want is to have a DMZ VLAN for internet-facing ESXi VMs hanging off my Brocade switch and routed out through the Untangle (with the VLAN configured on both of those). I was going to use the 2nd port of the 40GbE NIC on the ESXi server to connect to the Brocade on that VLAN (the first port is on the internal network).

I don't want any connectivity from those VMs into my other internal VMs, my internal network, or the ESXi storage network, and I was wondering whether the NFS datastore those VMs use (which is on the ESXi storage virtual switch) would be routable from the DMZ VMs themselves, since they are using that datastore. From what you said, it sounds like it won't be exposed - only the VMkernel port group has access.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Ima just step in here real quick and say.... man, one day, I hope to understand networking infrastructure, security, *insert more corporate buzz words here* well enough to even follow the last three posts in a meaningful way.
 

DaveFL

Explorer
Joined
Dec 4, 2014
Messages
68
Just found this. Great thread.

Are there any stability issues when using the Lynx AHCI adapter over, say, the usual LSI 2308/3008?
 

jmlinpx

Cadet
Joined
Oct 7, 2019
Messages
2
Great thread. I learned a lot from you. Thanks. I happen to have the same mobo. To best use the bifurcation feature, I bought a riser card, the Supermicro RSC-R2UT-2E8R, and an NVMe adapter, the Supermicro AOC-SLG3-2M2 (which you mentioned in the thread). Also, to fit these 2 cards in the case, I chose the Node 804.

Not received yet. Will let you know if they fit and work.
 

jmlinpx

Cadet
Joined
Oct 7, 2019
Messages
2
OK, I received all the parts I mentioned at #178. After setting the bifurcation to x8x4x4, it seems to work. The two extra PCIe slots are used for an NVMe adapter and an HBA card in IT mode. I installed 2 NVMe drives there to test, and ESXi successfully recognized them. I took some photos to share. (Except my 512GB Intel 660p was misidentified as a 2TB 660p; I don't know why.)

Screen Shot 2019-11-07 at 11.41.34 PM.png


Screen Shot 2019-11-07 at 11.42.10 PM.png


I used the following build:
Supermicro mini-ITX X10SDV-TLN4F
Node 804
Riser card: Supermicro RSC-R2UT-2E8R
NVMe adapter: Supermicro AOC-SLG3-2M2
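(If anyone else wants to confirm the bifurcated slot is working before passing anything through, here are a couple of read-only checks from the ESXi shell - a sketch that just lists what the host sees:)

# list all PCI devices the host sees; each NVMe controller behind the bifurcated slot
# should appear as its own device
esxcli hardware pci list
# list storage adapters; each NVMe drive typically shows up as its own vmhba using the nvme driver
esxcli storage core adapter list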
 

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
First of all: fantastic guide! I can't wait to try this in a few weeks. This is by far the most detailed and best-documented guide to setting up FreeNAS in a VM environment in a safe manner.

I do have a few questions though.

1) I am planning on using an Intel DC P3700 400GB AIC for ZIL/SLOG/swap, as you have in your guide. But what about the OS drive? I have a Samsung 850 EVO 250GB that isn't being used - is this a good enough option for installing ESXi, FreeNAS, and possibly other VMs? My primary plan is to install VMs (except for FreeNAS itself, of course) on my main pool, which is accelerated by the P3700, again, as described in your guide.

2) What about jumbo frames? If I set jumbo frames on the virtual network, will I have issues with communication with the rest of the network? 10Gb+ speeds are important in my virtual network, but I require compatibility with the rest of the clients (e.g. Plex Media Player on my TV and Apple TV, the connection to my main desktop, etc.).
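(Not an answer, but one quick way to verify this once it's set up: send a large, non-fragmentable ping across the path and see if it survives. A sketch with a placeholder client IP; 8972 bytes of payload = 9000-byte MTU minus 28 bytes of IP/ICMP headers:)

# from FreeNAS (FreeBSD ping): -D sets the don't-fragment bit
ping -D -s 8972 192.168.1.100
# the equivalent from the ESXi shell
vmkping -d -s 8972 192.168.1.100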

3) My plan is as follows: ESXi hypervisor running 3 VMs: FreeNAS, Ubuntu (game server hosting: TeamSpeak 3 + CS:GO + Terraria), and Windows 10/Ubuntu (automatic torrenting). I might add additional VMs later, but those are the only ones planned. Will my current hardware be sufficient, thinking especially of high performance on the FreeNAS side as well as on the game servers?

Specs:
MB: Supermicro X9SRL-F
CPU: Intel E5-2650 v2 (8C/16T)
RAM: 64GB 1600MHz DDR3 ECC
OS Disk (ESXi + FreeNAS): Samsung 850 EVO 250GB
HBAs: 2x LSI 9211-8i
Main pool connected through HBAs: 10x 10TB WD Red (white labels, shucked from WD external drives)
Cache card: Intel DC P3700 400GB (HW passthrough to FreeNAS, with 20GB SLOG, 128GB L2ARC, 16GB swap)
NIC: Intel X540-T2 dual 10GbE

I will allocate at least 32GB RAM to FreeNAS, and I'm not sure about cores/threads, but was thinking 4C/8T for FreeNAS. My main concern is CPU and RAM. Both high-performance FreeNAS with encryption and the game servers require a lot of resources. I am able to upgrade to up to 128GB RAM and to a 12C/24T E5-2697 v2 if required, but keeping my current setup is preferred, as upgrading RAM + CPU costs about $250.

I have several performance goals for this server. One of them is sustained reads/writes of around 1GB/s to my main desktop, which also has a 10GbE NIC. I would also like to be able to install my entire Steam library onto the NAS and still have good performance (load times in games).

This, in addition to a rock stable 128 tick CS:GO Server. :)
 