Round Robin LAGG Bonding Different than Linux

Joined
Dec 29, 2014
Messages
1,135
It isn't really "bonding", it is load balancing. I looked at the link you reference, and it says so in there. I can tell you this is true in switches (Cisco, HP, H3C, Juniper) as well. It looks like a single interface, but no conversation can get more bandwidth than the size of one member link. The various load balancing methods are just different hashes used to determine which link out of the bundle will carry a particular conversation. It really does matter how you do it, depending on the traffic patterns in your network. In most platforms I have seen, the default is to load balance based on destination information (MAC address, IP address) or perhaps a combination of those, depending on the capabilities of the platform. In my network, the FreeNAS units have a LAGG to the storage network and each ESXi host has a single interface on the storage network. If the switch load balanced toward FreeNAS using destination information, every conversation would take the same path. Because of that, the LAGG on my 10G switch load balances based on source MAC, which actually gives me a reasonable distribution. I guess that is a fairly long and rambling dissertation on bonding/load balancing, but I didn't see a specific question in there. :)
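To make the "no conversation can exceed one member link" point concrete, here is a minimal Python sketch of hash-based member selection. It is illustrative only, not FreeBSD's lagg(4) code or any switch's actual implementation; the em0-em3 link names come from later in this thread and the MAC addresses are made up.

```python
# Minimal sketch of hash-based LAGG member selection (illustrative only;
# not the actual FreeBSD lagg(4) or switch ASIC implementation).
import hashlib

MEMBER_LINKS = ["em0", "em1", "em2", "em3"]  # links in the aggregate


def select_link(src_mac: str, dst_mac: str, links=MEMBER_LINKS) -> str:
    """Pick the member link for a frame using a layer-2 style hash.

    Every frame with the same src/dst pair hashes to the same index,
    so a single conversation is pinned to one link and can never use
    more than that link's bandwidth.
    """
    digest = hashlib.sha256(f"{src_mac}-{dst_mac}".encode()).digest()
    return links[digest[0] % len(links)]


if __name__ == "__main__":
    # One conversation: every frame takes the same member link.
    print(select_link("00:0c:29:aa:bb:01", "00:0c:29:cc:dd:01"))
    print(select_link("00:0c:29:aa:bb:01", "00:0c:29:cc:dd:01"))

    # Many conversations (e.g. several ESXi hosts talking to FreeNAS):
    # the flows spread out, which is how the aggregate gets used.
    for host in range(1, 5):
        dst = f"00:0c:29:cc:dd:{host:02x}"
        print(dst, "->", select_link("00:0c:29:aa:bb:01", dst))
```

If the hash only looked at the destination and there were a single destination MAC, every flow would land on the same member link, which is exactly the situation described above and why a source-MAC policy distributes better there.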
 

webdawg

Contributor
Joined
May 25, 2016
Messages
112
Right,

But Linux has bonding modes that give bandwidth that can exceed one link.
 
Joined
Dec 29, 2014
Messages
1,135
But Linux has bonding modes that give bandwidth that can exceed one link.

The link you reference describes all the modes as doing some form of balancing, so I do not think that is the case. That said, it is likely moot if you are talking about 10G interfaces. There aren't a whole lot of things that can fill a single 10G NIC when reading from storage. I am thrilled when I get about 8G reading off my ZFS pools. You might be able to approach it with synthesized traffic (like from iperf) or perhaps off an SSD pool.
 

webdawg

Contributor
Joined
May 25, 2016
Messages
112
I have done this type of bonding in Linux before, and you can combine (add up) the bandwidth of more than one connection to get one fast connection.

It just does not work in FreeNAS at the moment, and I am wondering if it has ever been a feature of FreeBSD. From what I have been reading (though I cannot confirm it, because I do not know whom to ask), FreeBSD's round robin bonding is just different than Linux's.
 

webdawg

Contributor
Joined
May 25, 2016
Messages
112
I have also posted in the FreeBSD forum:

https://forums.freebsd.org/threads/lagg-bonding-freenas.67266/

I just need someone to confirm 100% that FreeBSD round robin bonding is supposed to sum up connection speeds in some manner.

Here is the background on what I have been testing w/ FreeNAS:

This was the last test I performed. It required me to pull one box from one building and put it in the same building as the other:
  • 1 quad-port Intel gigabit NIC on each server
  • I directly connected em0 to em0, em1 to em1, em2 to em2, and em3 to em3 via patch cables
  • Set up the roundrobin lagg identically on each server
I would get 25 Mbit/s.

First test:
  • I have a separate VLAN for each of em0, em1, em2, em3
  • I had the VLANs carried across some preconfigured laggs switch to switch (3 switches in total between the boxes)
  • I have a 10 Gbit link between two of the switches
  • Set up the roundrobin lagg identically on each server
I would get 25 Mbit/s.

Second test. I eliminated the switch-to-switch laggs because I did not know if the single MAC (presented by the round robin lagg) across all of them was forcing all 4 links onto a single lagg port:
  • I have a separate VLAN for each of em0, em1, em2, em3
  • I patched in directly to the switches with 10 Gbit SFP+ fiber, eliminating the laggs
  • Set up the roundrobin lagg identically on each server
I would get 25 Mbit/s.

I tested the VLAN configuration by configuring separate IP addresses on each NIC and running iperf tests one at a time between them, which confirmed there was no mixing of layer 2 or layer 3 traffic.

I even went as far as pulling each link, one at a time, and running iperf across the configured round robin lagg. That is, I would remove em3 on both servers and run a test, then do the same with em2 and em1 until I had only em0 left. I would always get 25 Mbit/s until I was down to a single link.

That is, if I had a round robin lagg with just em0 to em0, I would get the expected gigabit wire speed.

I am not trying to double post; I just need someone from the FreeBSD world to confirm that round robin bonding is supposed to do what it does in Linux, i.e. sum up link speeds. Then I can try to figure out what I am doing wrong, or why it is not working with FreeNAS.
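For contrast with the hash-based modes, here is a rough sketch of what per-packet round robin (Linux balance-rr, and conceptually FreeBSD's laggproto roundrobin) is expected to do on transmit. This is my illustration of the general idea, not either driver's actual code; the link names and packet count are arbitrary.

```python
# Rough sketch of per-packet round-robin transmit scheduling
# (the general idea behind Linux balance-rr / FreeBSD laggproto roundrobin).
# Illustrative only; link names and packet count are arbitrary.
from collections import Counter
from itertools import cycle

MEMBER_LINKS = ["em0", "em1", "em2", "em3"]


def round_robin_transmit(num_packets: int, links=MEMBER_LINKS) -> Counter:
    """Deal the packets of a single stream out across the links in turn."""
    scheduler = cycle(links)
    counts = Counter()
    for _ in range(num_packets):
        counts[next(scheduler)] += 1
    return counts


if __name__ == "__main__":
    # A single 10,000-packet stream is spread evenly over all four links,
    # so in principle it can use roughly 4x one link's bandwidth.
    print(round_robin_transmit(10_000))
    # In practice the receiver sees packets from four paths interleaved,
    # and TCP can interpret the resulting reordering as loss, which is one
    # common reason measured throughput collapses instead of quadrupling.
```

Whether this is what actually happens on the wire between a FreeBSD roundrobin lagg and its peer, and what the intermediate switches do with the single MAC presented across the member ports, is the open question in this thread.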
 
Joined
Dec 29, 2014
Messages
1,135
I have also posted in the FreeBSD forum:

Not to get too grouchy, but you can post it wherever you like. It doesn't change the fact that I don't buy it. That isn't how things work in network infrastructure. That is my primary day job, so I feel confident saying that. Maybe you can cobble something together if you connect two Linux boxes up back to back, but that doesn't make it work that way everywhere else. Looking here https://en.wikipedia.org/wiki/Link_aggregation, maybe you can achieve some of that with adaptive load balancing, but you would have to have the same thing on both sides. That makes it IMHO impractical.
 

short-stack

Explorer
Joined
Feb 28, 2017
Messages
80
I just need someone to confirm 100% that FreeBSD round robin bonding is supposed to sum up connection speeds in some manner.

FreeNAS uses LACP, which is an open IEEE standard, and it works the same everywhere. Instead of coming here and complaining about FreeBSD/FreeNAS not doing 'round robin' the same as Linux, look up the protocols that are being used. Cisco et al. have their own proprietary standards, but those only work between like-branded devices; LACP is the open link aggregation standard.
 