Round-Robin Link Aggregation Available in TrueNAS Scale?

el5 · Cadet · Joined: Feb 9, 2022 · Messages: 4
I was wondering whether the round-robin protocol is implemented in any way for link aggregation in TrueNAS Scale. In TrueNAS Core this option is available in the GUI, but it does not appear to be in TrueNAS Scale, at least not through the GUI (maybe through the CLI?). And if it is, is it the same as what Linux designates as bonding mode 0 (balance-rr, round-robin)? From some forum posts here, it seems that even the round-robin in Core is not exactly like the Linux one, but is more of a load-balancing implementation.

But if it is the same as the Linux implementation, I would like to use the round-robin aggregation protocol, because it is the only mode in Linux I have found that can actually combine the individual transfer rates of multiple Ethernet links into close to their theoretical total, by using Linux bonding in round-robin mode combined with separate VLANs on different ports/links/subnets.

Anyway, if anyone more experienced with the inner workings of TrueNAS Scale can confirm whether round-robin is implemented and configurable via the CLI, I would greatly appreciate it. That way I can decide whether I should attempt to get it running or bite the bullet and just upgrade my network to 10Gbps.
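In case it helps anyone checking: since Scale is Debian-based, I assume the standard Linux bonding information is visible from a shell, so something like the following should show which mode a GUI-created LAG is actually using (bond0 is a placeholder, the interface name may differ on your system, and I would expect manual ip changes to be overwritten by the middleware, so this is only for inspection):

# Show which Linux bonding mode the LAG is using;
# mode 0 reports "Bonding Mode: load balancing (round-robin)"
cat /proc/net/bonding/bond0

# List bond interfaces known to the kernel, with details
ip -d link show type bond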

Thanks in advance.


Optional read below:

Details of what I am trying to achieve and how:

I would like to aggregate the four Ethernet links on my quad NIC so that I can achieve combined network rates of about 75-80% of 4x1Gbps (about 3Gbps), which I am able to achieve on computers running regular Linux distributions (Ubuntu/Xubuntu) configured with round-robin aggregated/bonded links. The throughput is not >95% because of the extra out-of-order packet delivery (and the retransmissions it triggers) when using the round-robin protocol, but I'm OK with that since I still achieve the higher data rates. To achieve this I create a Linux bond of the four NIC ports and put each port on a separate VLAN/subnet, roughly as sketched below. I can link the blog post where I learned this if anyone is interested.
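For reference, the client-side setup is roughly along these lines (a minimal sketch with iproute2; the interface names and address are placeholders for my actual ones):

# Create the bond in round-robin mode (Linux bonding mode 0, balance-rr)
ip link add bond0 type bond mode balance-rr

# Slaves must be down before they can be enslaved
ip link set enp1s0f0 down; ip link set enp1s0f0 master bond0
ip link set enp1s0f1 down; ip link set enp1s0f1 master bond0
ip link set enp1s0f2 down; ip link set enp1s0f2 master bond0
ip link set enp1s0f3 down; ip link set enp1s0f3 master bond0

# Bring the bond up and give it an address (example subnet)
ip link set bond0 up
ip addr add 192.168.50.10/24 dev bond0

The VLAN separation is done on the switch side: each of the four ports goes into its own untagged VLAN, so the switch treats each link as an independent path and is not confused by the same MAC address showing up on four ports.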

All my computers have quad NICs installed (plus the extra onboard NICs). I know it might be simpler to switch over to 10G or 2.5G networking, but I want to know whether round-robin aggregation is available in TNS before I decide to spend money on upgrading my network. Round-robin is the only protocol I have confirmed works to merge the bandwidth of multiple Ethernet links between Linux machines. Currently, as a workaround, because I couldn't figure out how to aggregate in round-robin mode in TNS, I created four separate VLANs on their own subnets, which I access individually through the file manager (Thunar, Nemo) whenever I need to transfer larger amounts of data, initiating multiple transfers in parallel on the different IP subnets (see the sketch below). It's a little cumbersome at times, but manageable. The only real issue is when I need to run multiple parallel transfers from disks that are not in any array, which is why I would prefer to merge the Ethernet links and initiate only one disk transfer at a time from the hard drives.
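To illustrate the workaround, the parallel transfers look roughly like this (the addresses and paths are made up for the example):

# One transfer per subnet/VLAN, each riding its own 1Gbps link
rsync -a /data/part1/ user@192.168.10.1:/mnt/tank/incoming/ &
rsync -a /data/part2/ user@192.168.11.1:/mnt/tank/incoming/ &
rsync -a /data/part3/ user@192.168.12.1:/mnt/tank/incoming/ &
rsync -a /data/part4/ user@192.168.13.1:/mnt/tank/incoming/ &
wait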

Now, to avoid any confusion, I am not looking at LACP/LAG/802.3ad (mode 4 in Linux bonding), since this protocol does not achieve what Linux round-robin bonding does. LACP effectively reserves one link between TNS and any given client and uses that single link for all of that client's traffic. The TNS server can talk to multiple clients at 1Gbps each, so it can very well see an aggregate of up to 4Gbps from multiple clients at its quad NIC, but each individual client is only ever given a single 1Gbps link to the TNS server.

Also, I believe the load-balancing protocol available in the TNS GUI is not round-robin, but probably corresponds to either mode 5 (balance-tlb) or mode 6 (balance-alb) in Linux. It too will maintain only one 1Gbps link to each client; it simply balances outgoing traffic across the different clients.

The balance-rr (round-robin, mode 0) protocol, instead, sends packets out in a fixed rotation across all four NIC ports, one after the other. When each NIC port is placed on a separate physical Ethernet link in its own VLAN, a single client can send/receive packets over the four individual links through one interface bonded to the four physical interfaces, with the Linux bonding driver responsible for striping the stream across the links and reassembling it at the destination (a way to verify this striping is sketched below). SMB Multichannel on Windows does something similar, but that feature is still in alpha testing and only applies to SMB. I also saw a post where someone achieved SMB Multichannel on FreeNAS.
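One way to confirm the striping once a balance-rr bond is up is to watch the per-slave packet counters during a single large transfer; under mode 0 all slaves should increment at roughly the same rate (bond0 and the slave names are again placeholders):

# Per-interface TX/RX statistics; under balance-rr all four
# slaves should carry a roughly equal share of one stream
ip -s link show enp1s0f0
ip -s link show enp1s0f1
ip -s link show enp1s0f2
ip -s link show enp1s0f3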

 

morganL (Captain Morgan) · Administrator · Moderator · iXsystems · Joined: Mar 10, 2018 · Messages: 2,694
Isn't this normally solved with a good switch?
 

el5 · Cadet · Joined: Feb 9, 2022 · Messages: 4
I will go ahead and budget out a 10G network for my needs. It will be simpler than attempting what I originally proposed.

Thanks for the replies.
 