Ideal TrueNAS SCALE design?

JonathanTX

Dabbler
Joined
Nov 4, 2021
Messages
13
I am trying to figure out what the ideal design would be for a TrueNAS SCALE implementation. I would like to serve CIFS/NFS shares, and iSCSI when it's finally ready.

What would the ideal architecture look like for a three-server deployment? Forget storage and disks for the moment; I am really asking about network layout. How should management, user shares, heartbeat/replication, and iSCSI look on a diagram?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There is no such thing as an "ideal design" based on a vague statement like this.

Your ideal network layout is one suitable to meet your actual requirements. This could be three NAS units and a single user's PC connected to a Netgear GS308T 8-port gigabit switch, which is ideal because it is simple and sufficient. At the other end, you could have each NAS with dual 100GbE connected to some 100G switches in an environment with dozens or hundreds of VLANs in a complex datacenter topology that involves dozens or hundreds of bits of networking gear beyond the immediate attachment. You could have the gear in different racks to meet power diversity and resilience requirements, or even in different data centers connected via metro or WAN ethernet to meet geographic diversity requirements.
 

JonathanTX

Dabbler
Joined
Nov 4, 2021
Messages
13
@JWhite001: Your reply was valueless.

@jgreco: I was really asking more about how the logical connectivity would work, less about the physical.

For instance, I build and manage VMware clusters every day, all day. Typically, I would hope to have 2 NICs for management traffic, 2 NICs for VM data traffic, and 2 NICs for iSCSI traffic. In most situations I have to collapse management, VM data, and even vSAN down to the same 2 physical NICs in a blade cluster (but with different vmkernel interfaces and different VLANs), simply because blade servers just don't have the NIC counts. Most blades have at least 4 NICs, so I can still get iSCSI A/B traffic on separate NICs, and so on.

So now I am thinking about SCALE, wondering what considerations need to be made for a cluster. Is there replication/cluster traffic that would benefit from being separated from front-end user traffic? This first cluster I am going to build will use 1G interfaces. Can TrueNAS use multiple interfaces for replication traffic to speed up copying data? I don't see anything in TrueNAS right now that is even prompting me to create different networks for different traffic... so does that mean none of this is even necessary?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So I'm going to kinda go through this and I want you to understand that I'm conveying what I think you need to know without knowing what you REALLY know, because I know people in your role. I work extensively with VMware too, but I'm a bit more of a "meta" person.

One of the things I've run into repeatedly is people who parrot the VMware guidelines for building networks but have basically little-to-no clue why some of the particulars exist. As an example, VMware likes to encourage a separate interface for vMotion traffic, and (perhaps obviously) the reason is that the traffic levels on this go from zero to SWAMPED to zero again as vMotions spin up. Likewise, segmenting iSCSI onto private storage networks ensures that the hypervisor has lots of I/O capacity that isn't being affected by vMotions or the traffic from virtual machines. This is what VMware teaches (/is supposed to teach), and hopefully THIS is all "obvious". But if you actually look at the traffic on some of these dedicated interfaces, it turns out that swamping is rarer than you might expect.

There's another aspect here, though, and that is vSwitch performance. When you have a vSwitch with just a physical NIC and a vmkernel interface, this is very easy for the hypervisor to deal with. However, if you have a vSwitch with a physical NIC, several vmkernel interfaces, and five hundred virtual machines on dozens of VLANs, the processing capabilities of the host are taxed, and it takes much more work to cope with the network traffic. Being Ethernet, you are dealing with demultiplexing multiple streams off of what is essentially a serial connection. Latency tends to be higher, and performance is worse, because in addition to competition with other traffic, every received packet has to go through the vSwitch's hash tables to figure out what to do with it. And this is not trivial, no matter how much VMware likes to boast about their assembler-optimized vSwitch implementation.
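To make that concrete, here's a toy sketch of the kind of per-frame destination lookup a vSwitch has to do. This is not VMware's actual implementation, just an illustration (with made-up MACs and port names) of why every received frame costs the host some work:

```python
# Toy model of per-frame vSwitch forwarding (illustration only; the real thing
# also handles uplink teaming, unknown unicast, offloads, and much more).

# Forwarding table: (vlan_id, dst_mac) -> port. With hundreds of VMs on dozens
# of VLANs this table gets large, and it is consulted for every single frame.
forwarding_table = {
    (10, "00:50:56:aa:bb:01"): "vmk0",        # management vmkernel
    (20, "00:50:56:aa:bb:02"): "vmk1",        # iSCSI vmkernel
    (30, "00:50:56:aa:cc:10"): "vm-nic-042",  # one of many guest vNICs
}

def forward(frame):
    """Demultiplex one received frame off the shared physical uplink."""
    port = forwarding_table.get((frame["vlan"], frame["dst_mac"]))
    if port is None:
        return "flood-or-drop"   # unknown destination handling
    return port                  # hand the frame to that vmkernel/vNIC

# Every received frame pays this lookup (and more), which is why a busy,
# complicated vSwitch costs host CPU and latency, while a filer's plain IP
# stack on a dedicated NIC doesn't have this layer at all.
print(forward({"vlan": 20, "dst_mac": "00:50:56:aa:bb:02"}))  # -> vmk1
```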

Now, I would be very happy if every word I've said is entirely obvious to you, but a lot of VMware engineers don't quite get some bits of this until it is outlined for them. VMware encourages network segmentation by assuming the worst case for storage and vMotion, and creates some general rules to make lots of 1G networks, because things can truly suck if you don't. However, by the time you get up to 40GbE, ESXi 6.0 limited you to only four interfaces (as an example), so you'd need to understand the whys, rather than configuring by rote, to survive that limit. That usually worked out to putting VM traffic on one or two of them, and everything else (storage, vMotion, management) on the other two. (Sort of assuming you do the usual dual-ethernet failover thing for your networking; other designs are clearly possible, though.)

But FreeNAS/TrueNAS isn't a vSwitch implementation, so one of the major design considerations you have with vSphere simply vanishes and does not exist. Of course, you can actually do VMs on TrueNAS, and if you do, then, yes, you get a bridge ("vSwitch") and that should probably be on a separate interface...

Other than that, you are dealing with a single host. You are perfectly welcome to design this as you see fit. You can connect one of the mainboard's gigabit interfaces to an airgapped management network, dedicate some Intel X520 ethernets to your iSCSI SAN networks for hypervisors, attach an Intel XXV710 to that new 2.5/25G switch that handles your office PCs, and make a private network between your filers for replication traffic if you like.
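Purely as an illustration, and with made-up interface names and roles (nothing here is a TrueNAS API or a recommendation), that kind of segmentation plan is really just a table you can jot down before you ever touch the UI:

```python
# Hypothetical back-of-the-napkin interface plan for one filer. Interface
# names, speeds, and roles are invented for the example; the point is only
# that each physical port gets one clearly defined job.
interface_plan = {
    "eno1":    {"speed": "1G",  "role": "management",  "network": "airgapped mgmt switch"},
    "ix0":     {"speed": "10G", "role": "iSCSI-A",     "network": "SAN A (access port)"},
    "ix1":     {"speed": "10G", "role": "iSCSI-B",     "network": "SAN B (access port)"},
    "enp65s0": {"speed": "25G", "role": "SMB/NFS",     "network": "office 2.5/25G switch"},
    "eno2":    {"speed": "1G",  "role": "replication", "network": "private filer-to-filer link"},
}

for nic, cfg in interface_plan.items():
    print(f"{nic:8} {cfg['speed']:>4}  {cfg['role']:<12} -> {cfg['network']}")
```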

These are all going to the same IP stacks on each of the filers, so there is no vSwitch efficiency issue at play. There IS traffic contention at play, of course, but that is something you can choose to design for as you wish. Unlike on ESXi hosts, 99%+++ of traffic on a filer is storage traffic of some sort, NFS, SMB, iSCSI, replication, whatever, so there are no "by rote" default rules for segmenting it. You are expected to understand what your needs are, and you can even screw this up and evolve it later as needed.

You can benefit from some of the concepts from ESXi. For example, dedicating two access-mode 10G network interfaces to iSCSI for your SAN will guarantee 20G of bandwidth for the SAN, with mildly lower latency, and will therefore probably be somewhat more performant than if you shoved everything onto a single 40G VLAN trunk. But shoving everything onto a single 40G trunk will definitely work too.
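As a toy way of quantifying that trade-off, under a crude worst-case fair-sharing assumption (the numbers and traffic classes are illustrative, and real traffic is bursty, so a shared trunk usually does far better than this floor):

```python
# Dedicated ports: the SAN gets the whole pipe, no competing traffic classes.
dedicated_iscsi_gbps = 2 * 10   # two access-mode 10G ports just for iSCSI
print(f"dedicated: {dedicated_iscsi_gbps} Gb/s guaranteed for iSCSI")

# Shared trunk: assume (crudely) that everything bursts at once and shares fairly.
trunk_gbps = 40
competing_classes = 4           # e.g. iSCSI, SMB/NFS, replication, management
floor_gbps = trunk_gbps / competing_classes
print(f"shared trunk: ~{floor_gbps:.0f} Gb/s worst-case floor for iSCSI")
```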

I hope I've kinda given you some idea as to why you're not going to find anything that is forcing you into a particular architecture. There will certainly be things you can do that are more optimal for your specific needs, but without a pages-long description of your network, goals, etc., it isn't really possible to provide you with optimized guidance, and that's more into the realm of professional consulting anyways.
 

JonathanTX

Dabbler
Joined
Nov 4, 2021
Messages
13
No, VMware-wise, I'm with you 100% on everything you said. Totally get it; you're preaching to the choir.

Skipping to paragraph 5, I guess it comes down to this: all that complexity really isn't necessary for a TrueNAS SCALE implementation. I was just expecting there to be some pre-canned recommendations that I could start with, and if there aren't... I guess that's fine. Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You've generally got a much freer hand with FreeNAS/TrueNAS to design as you see fit. That doesn't mean you can't screw it up, of course. ;-)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One of the boards I looked at early this year for my new NAS had 2 x 1Gigabit, 2 x 10Gigabit over copper, and 2 x 10Gigabit over SFP+. That would have met many of the requirements for separating traffic without resorting to a PCIe card for more Ethernet ports.

In the end, the board I selected has 2 x 1Gigabit and 2 x 10Gigabit over copper. For me, just fine. (Though I do have some free PCIe slots in case I want something different.)

So, if you have some choices on boards, you may find one that meets your needs.
 

MurtaghsNAS

Dabbler
Joined
Jul 21, 2021
Messages
17
I am going to give a bit of a Zen answer, but I think it shows why there isn't much of a "cookbook" of configurations. My answer: just look at the sheer range of solutions people are chasing here. You have everything from home users with USB drives and SATA expanders to TrueNAS M-Series servers where storage is measured in petabytes. You have people chasing five nines of uptime and people who can tolerate days of downtime. The TrueNAS solution you want is somewhere in that range. You just have to make it work for you.
 