SCALE bonding modes

crispyduck

Cadet
Joined
Jan 30, 2022
Messages
9
Hi! First of all, thanks for SCALE; it is really awesome and I have already been using it for some months on my home NAS.

Now I am about to switch my lab environment from a FreeBSD HAST setup serving as an NFS 4.1 datastore for ESXi hosts to TrueNAS SCALE. For the moment I am just testing on one server.
As I am limited to gigabit networking, I have used NFS session trunking with 4x1GbE in the past. Now I have found out that nfs-ganesha, which SCALE uses, does not support session trunking as specified in the NFS 4.1 RFC. :-(
But it is great that SCALE supports Docker out of the box, so for testing it was quite easy to run an NFS server in Docker, which, with the kernel modules loaded, does support NFS session trunking. With that I get reads and writes of up to 450 MB/s on the NFS datastore mounted on ESXi.

I have tested this for a week now, and it works fine with several ESXi hosts connected.
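In case it is useful to anyone: on the ESXi side the datastore is simply added with all four server IPs, which is what enables the trunking (the IPs, export path and datastore name below are just placeholders for my setup):

Code:
esxcli storage nfs41 add -H 10.0.101.10,10.0.102.10,10.0.103.10,10.0.104.10 -s /mnt/tank/vmstore -v nfs41-ds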

As a next step I would like to try the SCALE cluster functionality, but here I am having trouble planning the network config.
All my servers have 4x1GbE uplinks to a Cisco switch. For Gluster I need bonding; the suggested modes are balance-alb or LACP.
SCALE does not support balance-alb; why?
Okay, I can use LACP, but what should I then do with my NFS session trunking? Like iSCSI MPIO, it makes no sense to run it on top of a bonded interface.

Is there any way to have both on the same physical uplinks, or to specify an uplink for each VLAN? E.g. six VLANs: vlan_mgmt, vlan_nfs1-4 and vlan_gluster, where vlan_mgmt and vlan_gluster should be bonded and vlan_nfs1-4 should each get their own uplink.

Is there any way to do this? On ESXi it would be possible with multiple port groups and the right active/standby uplink settings. Is there a way to configure something similar in Linux and SCALE?

The same goes for containers and VMs running on SCALE: is there a way to have my mgmt and storage networks on a bond while still being able to assign individual uplinks to containers and VMs?


Besides these bonding questions, does it make any sense at all to serve NFS via session trunking, or iSCSI via MPIO, when the underlying storage is a redundant Gluster volume on a bonded 4x1GbE link? Reads should be fine, but writes? Will writes to a single NFS datastore or iSCSI volume ever utilize more than one Gluster network uplink?

regards,
crispyduck
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Once you've bonded the links, they are treated as one IP interface, so it's probably pointless to run a higher-level protocol that tries to use multiple interfaces. Does anyone know of an exception to this?

As for balance-alb... I'll have to check.
It's a Linux-specific mode that the TrueNAS middleware may not have enabled.
The documentation in this area still needs some work, so perhaps it's a hidden feature.
 

crispyduck

Cadet
Joined
Jan 30, 2022
Messages
9
Thanks! I am not that familiar with bonding/interface configuration under Linux.

I thought it should be possible to configure the following:

Code:
eth0  --- eth0.101 -- iSCSI IP 1 / NFS IP 1
            \ eth0.100 ----------------------------
                                                    \ __ bond100 -- IP
eth1  --- eth1.102 -- iSCSI IP 2 / NFS IP 2         /
            \ eth1.100 ----------------------------


Sure, this won't work with LACP, but shouldn't it be possible with xor, alb, etc.?
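Roughly what I mean, written out with iproute2 (just a sketch; the IPs are placeholders, and whether the bonding driver accepts VLAN sub-interfaces as slaves in xor/alb mode is exactly the part I am unsure about):

Code:
# per-NIC VLANs carrying the individual NFS/iSCSI paths
ip link add link eth0 name eth0.101 type vlan id 101
ip link add link eth1 name eth1.102 type vlan id 102
ip addr add 10.0.101.10/24 dev eth0.101
ip addr add 10.0.102.10/24 dev eth1.102
ip link set eth0.101 up
ip link set eth1.102 up

# shared VLAN 100 on both NICs, bonded for mgmt
ip link add link eth0 name eth0.100 type vlan id 100
ip link add link eth1 name eth1.100 type vlan id 100
ip link add bond100 type bond mode balance-xor
ip link set eth0.100 master bond100
ip link set eth1.100 master bond100
ip addr add 10.0.100.10/24 dev bond100
ip link set bond100 up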

br
crispyduck
 

crispyduck

Cadet
Joined
Jan 30, 2022
Messages
9
Hi, any suggestions here? Will SCALE support balance-alb, which should not be a problem for the underlying Linux platform?

What about the second scenario, is that possible somehow? I also played a little with macvlan, which also works perfectly with a bond on top of it.
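(For reference, the macvlan interfaces themselves are trivial to create; a sketch with placeholder names and IPs, the open question is only how to combine them with a bond in a supported way:)

Code:
# macvlan sub-interface on a physical NIC, e.g. to hand an uplink to a container
ip link add mv-nfs1 link eth2 type macvlan mode bridge
ip addr add 10.0.103.10/24 dev mv-nfs1
ip link set mv-nfs1 up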

I know it will be hard to cover all possible network scenarios in the GUI/CLI, but is there a way to do more complex configurations manually and persist them somehow? I don't know exactly how TrueNAS stores the network config and how it is applied; maybe there is a way to add advanced network config, e.g. a config file or script that is executed as part of the TrueNAS network configuration.

Sure, I could also create a startup script, but I would prefer an official way, e.g. an expert mode with the possibility to add one's own network scripts. See the sketch below for what I mean.
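Something like this is what I have in mind: a small script that the middleware runs after applying its own network config (completely hypothetical path and contents; I am assuming the existing Init/Shutdown Scripts feature under System Settings > Advanced as the hook, run at POSTINIT):

Code:
#!/bin/sh
# /root/post-network.sh (hypothetical), registered as a POSTINIT script
set -e

# re-apply the parts of the config the GUI cannot express,
# e.g. one of the per-NIC NFS/iSCSI VLANs from the diagram above
ip link add link eth3 name eth3.104 type vlan id 104 || true
ip addr add 10.0.104.10/24 dev eth3.104 || true
ip link set eth3.104 up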

What do you think about this idea?

regards
cd
 
Joined
Jun 2, 2019
Messages
591
LACP works in CORE. The last time I tried LACP on SCALE, while it would let me bond two interfaces together, it did not improve the combined bandwidth because SCALE was using the same virtual MAC address for both interfaces.

I never got a resolution on my inquiry and ticket from months ago, so I gave up asking.



Code:
root@NAS-3[~]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.69.13  netmask 255.255.255.0  broadcast 192.168.69.255
        inet6 fe80::782a:19ff:fe90:71dc  prefixlen 64  scopeid 0x20<link>
        ether xx:2a:19:90:71:dc  txqueuelen 1000  (Ethernet)
        RX packets 2233519  bytes 2915961062 (2.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 388757  bytes 31589193 (30.1 MiB)
        TX errors 0  dropped 3 overruns 0  carrier 0  collisions 0

enp5s0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether xx:2a:19:90:71:dc  txqueuelen 1000  (Ethernet)
        RX packets 2152673  bytes 2904851795 (2.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 356821  bytes 26033961 (24.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0x81500000-8157ffff

enp6s0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether xx:2a:19:90:71:dc  txqueuelen 1000  (Ethernet)
        RX packets 80846  bytes 11109267 (10.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 31936  bytes 5555232 (5.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0x81400000-8147ffff
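If it helps with debugging, the kernel also exposes the negotiated bond details (mode, transmit hash policy, per-slave LACP state) here:

Code:
cat /proc/net/bonding/bond0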
 

crispyduck

Cadet
Joined
Jan 30, 2022
Messages
9
Oh, good to know. THX!
I only tested LACP briefly and did not do further load tests, as it is not really what I want anyway.

I need multipathing (iSCSI MPIO and NFS session trunking), so I would like to configure a VLAN with an IP in its own subnet on each of my 4 NICs. At the same time I would like to bond these 4 NICs for fault tolerance on my mgmt network, as well as for load balancing of other services.

This will definitely not work with a port channel, but it should work with e.g. balance-alb.

I was also able to configure this in the shell, but it is not supported by SCALE. Since the system is capable of doing it, maybe it really would be a good approach to allow such advanced setups through an extra script, config file, or something else, as I described above.

I like the idea behind SCALE, but maybe it is not flexible enough for my lab environment and it would be better to use a standard distro and build my storage, cluster, etc. manually.

For my home NAS, and also for my friends' NASes, it is great. The only thing that concerns me a bit is that k3s is really resource-hungry, or at least has high CPU load. What has your experience been here?
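(I have just been eyeballing it on the shell, something along the lines of:)

Code:
# rough look at which k3s-related processes eat CPU; process names may differ per release
ps -eo pid,%cpu,%mem,comm --sort=-%cpu | grep -E 'k3s|containerd' | head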

regards,
cd
 