
LACP ... friend or foe?

G8One2

Patron
Joined
Jan 2, 2017
Messages
248
I like to look at LACP this way: it's used for failover, so if one link goes down you don't lose your connections. Or, if you have it configured as a single interface, it's just like the highway you drive your car on. The speed limit stays the same; you're just adding lanes so more cars can travel down the road at the same time. If you're after more speed, you need to upgrade your network to 10Gb. LACP won't make anything faster, it just lets you have more links running at gigabit speed without one bogging down the others. At least that's how I understand it, in a dumbed-down kind of way.
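
If it helps to see that in code, here's a toy Python sketch of the idea: an LACP-style hash pins each flow to one member port, so a single transfer never goes faster than one link. The hash, port names and speed are made up for illustration, not what any real switch or the FreeBSD lagg driver actually does.

```python
# Toy model of LACP frame distribution: every flow is hashed onto one
# member link, so a single flow tops out at that one link's speed.
import hashlib

MEMBER_PORTS = ["igb0", "igb1"]   # assumed 2-port LAGG
LINK_SPEED_GBIT = 1               # gigabit members

def pick_port(src_mac, dst_mac, src_ip, dst_ip):
    """Simplified layer 2+3 style hash: same endpoints -> same member port."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return MEMBER_PORTS[int(hashlib.md5(key).hexdigest(), 16) % len(MEMBER_PORTS)]

# One client talking to the NAS always lands on the same member:
port = pick_port("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99",
                 "192.168.1.50", "192.168.1.10")
print(f"This flow rides {port} and tops out at ~{LINK_SPEED_GBIT} Gbit/s,")
print("no matter how many lanes (members) are in the LAGG.")
```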
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,825
If you want to use the highway analogy, think of each user driving a single car. So, more lanes may lead to more throughput if there are many users AND the network switch is smart enough to allocate/balance the traffic evenly AND the NAS can handle the traffic. A lot of links in that chain.

Given how inexpensive NICs and 10GbE switches have become, it’s pretty tempting to jump on the 10GbE bandwagon. However, unless you have an SSD pool or multiple vdevs in your HDD pool, you likely won’t see anything near 1000MB/s throughput. My server manages 300MB/s in short bursts for big files with my single-vdev RAIDZ3 pool, and much less for smaller files.
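
Rough numbers only, but the bottleneck math is easy to sanity-check. The little sketch below just plugs approximate usable link rates and a guessed 300MB/s pool burst figure into a min(); all the figures are assumptions, so adjust for your own gear.

```python
# Back-of-envelope: effective throughput is the smaller of what the wire
# can carry and what the pool can serve. All numbers are rough guesses.
POOL_BURST_MB_S = 300                          # e.g. a single-vdev HDD pool, big files
LINKS_MB_S = {"1 GbE": 117, "10 GbE": 1170}    # approximate usable payload rates

for name, wire in LINKS_MB_S.items():
    effective = min(wire, POOL_BURST_MB_S)
    limiter = "network" if wire < POOL_BURST_MB_S else "pool"
    print(f"{name}: ~{effective} MB/s, limited by the {limiter}")
```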
 
Joined
Dec 29, 2014
Messages
1,135
There are more than a few "it depends" in there. I am not an iSCSI expert, so I will let others who are more knowledgeable comment on that part. I do believe that iSCSI has its own load balancing/distribution method, so that would probably be the way to go if you are trying to squeeze more out of 1Gb links for iSCSI storage. Because LACP (or any other Ethernet aggregation technology) is load balancing and NOT bonding, you have to understand your workload and how to properly distribute things to get good utilization. Also remember that you are only as fast as the slowest component, so your pool structure will matter a lot there depending on workload. I have 40Gb links on the storage side, but I never get above 20Gb throughput because of my pool structure. My stuff is my home lab, so I wasn't willing to lose the space to a bunch of mirrored drives to get the IOPS I would need for more storage throughput. I do consistently get over 39Gb through iperf, so I know my storage/pool is the bottleneck for me. It meets my needs, though. Truthfully, the 40Gb stuff is just because I can! :smile:
 

NickDaGeek

Cadet
Joined
Dec 2, 2019
Messages
8
Thanks for the replies Elliot, Constantin and G8One2. It kind of confirmed what I was thinking: adding more lanes doesn't change the speed limit. The bottleneck may not be the network either; it might be the storage pool, and IOPS are expensive in terms of storage devices. I have 8 x 1.2 TB SAS drives, but as I said, only in single RAID 0 containers for whatever strange reason (I think it's ZFS not wanting a hardware RAID controller in the way of the disks, preferring JBOD instead). Either way, the disks are invisible if I create a RAID array out of them, so I had no choice.

That said, I am noticing that the iSCSI is dropping sometimes, for no particular reason and at seemingly random times of the day, as reported by FreeNAS, so I may not be out of the woods yet.
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
If you're seeing MAC addresses on other interfaces on your switch and it thinks there is flapping going on, you do NOT have the LAG set up right. Is the connection active/active for LACP, and does your switch support active/active LACP? You should see something on the switch side that says LACP is active and you have 4 members bound. I am accustomed to doing this on a Cisco switch, and it works every time I do it. LACP is session based, so once a user's session is on one interface it stays there; if the application creates hundreds of random TCP ports, you may see one user's sessions balanced across more than one port. FTP is most likely going to hang out on one port, and even if you have 10 users trying to FTP at the same time, you might see one port with 5 users, one with 3, one with 2 and one with none. It depends on how your switch balances the traffic; if you can do L7 load balancing and FreeBSD supports it, you might get better loading.
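
To put a toy example behind that 5/3/2/0 kind of split, the Python sketch below hashes ten made-up client sessions onto a four-member LAG and counts how many land on each port. The hash, IPs and port names are purely illustrative, not any particular switch's algorithm.

```python
# Hash ten single-connection sessions (think FTP users) onto a 4-member LAG
# and count how many land on each port. The spread is often lopsided.
import hashlib
from collections import Counter

PORTS = ["Gi1/0/1", "Gi1/0/2", "Gi1/0/3", "Gi1/0/4"]   # assumed 4 members
SERVER = "192.168.1.10"

def member_for(client_ip):
    digest = int(hashlib.md5(f"{client_ip}->{SERVER}".encode()).hexdigest(), 16)
    return PORTS[digest % len(PORTS)]

clients = [f"192.168.1.{i}" for i in range(101, 111)]   # 10 users
spread = Counter(member_for(ip) for ip in clients)
for port in PORTS:
    print(f"{port}: {spread.get(port, 0)} session(s)")
```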
 
Joined
Dec 29, 2014
Messages
1,135
I don't know if you are just using FreeNAS for storage, or if you are also using it as a hypervisor. I can't comment on the latter as I don't use the virtualization features of FreeNAS. I do think it's worth temporarily disconnecting the extra links in your LAGG to see if the drop issues are resolved. That will tell you if there is a problem with the LAGG config.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,087

Howdy. I hear someone asked for the iSCSI guy. :)

ESXi 6.5 hosting a Windows 2012 R2 VM running Veeam Backup and Replication. It has a pair of iSCSI targets on a pair of NAS boxes: FreeNAS 11.7 and a ReadyNAS. I am not really worried about the ReadyNAS, as it was a really poor choice by my predecessor (a home-user NAS with an operating system geared to that at heart) and is an "offsite" backup over a wireless bridge to another site, so the bottleneck is the bridge.

The FreeNAS is sitting on a LAGG of two 1GbE NICs on an HP ProLiant DL380 G6
Processors: 2x Intel Xeon X5550 2.67 GHz
Memory: 32 GB
Network: 1x quad-port HP NC364T network card connected to a Cisco SG500

@Elliot Dierksen is on the money here - iSCSI has its own load-balancing protocol called MPIO (MultiPath Input/Output). The short answer here is that LACP and MPIO don't mix. LACP takes several interfaces, bundles them together and presents a single IP address. MPIO takes those several interfaces, keeps them distinct with their own IP addresses, and shotguns the traffic across all of them.

For a long answer, I'll have to set up an MPIO resource similar to this thread.
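
In the meantime, here's a very rough Python sketch of the difference (not how FreeNAS, Windows or ESXi actually implements either): the LACP side hashes on the endpoints, so one initiator-to-target conversation stays on one member, while the MPIO side keeps a session per path IP and round-robins I/O across them. The interface names and IPs are made up.

```python
# Toy contrast: LACP pins one iSCSI session to one member of the LAGG,
# while MPIO opens a session per path (per IP) and spreads I/O across them.
import itertools

LACP_MEMBERS = ["igb0", "igb1"]               # one shared IP on the LAGG
MPIO_PATHS = ["10.0.10.10", "10.0.11.10"]     # assumed one target IP per NIC

def lacp_member(initiator_ip, target_ip):
    # Same endpoints -> same hash -> same member, for the life of the session.
    return LACP_MEMBERS[hash((initiator_ip, target_ip)) % len(LACP_MEMBERS)]

mpio = itertools.cycle(MPIO_PATHS)            # simplistic round-robin policy

for io in range(4):
    print(f"I/O {io}: LACP uses {lacp_member('10.0.10.50', '10.0.10.10')}, "
          f"MPIO uses {next(mpio)}")
```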

Further, having read a few other posts, I am not certain that my FreeNAS box would have the IOPS to handle the extra network performance anyway. On top of the discussion of the poor performance that some say Xeons deliver as processors for FreeNAS, there is something strange in the way HP's onboard hard disk controller presents disks to FreeNAS. In order to get them to be seen in FreeNAS at all, I had to create RAID 0 arrays consisting of only one disk each. I am not sure if that is just the way ZFS prefers it, or if this is a bug in the HP drive controller, and whether that is also impacting performance.

"Yes" to all of it. The ciss driver used for the SmartArray P4x0 series is not particularly good under FreeBSD, ZFS doesn't like hardware RAID and vice versa due to the way ZFS throws large volumes of traffic in "transaction groups" and often makes RAID controllers barf, and having personally worked with the P410 in the HP G6 series, it's a pile of crap in my opinion. Thankfully a "proper HBA" in the form of the Dell PERC H200/H310, IBM M1015, or HP H220 is very affordable online. Swapping in-place though is going to be impossible though because your SmartArray has set each drive up as a RAID0. Sorry.

The good news is that if you can get it swapped you should get a decent bit of performance improvement. (I've got some other hardware suggestions, but rather than clutter this thread, maybe a new one. Throw me a DM if you do, the @ tag functionality seems to be more miss than hit these days.)

Thanks for the replies Elliot, Constantin and G8One2. It kind of confirmed what I was thinking: adding more lanes doesn't change the speed limit. The bottleneck may not be the network either; it might be the storage pool, and IOPS are expensive in terms of storage devices. I have 8 x 1.2 TB SAS drives, but as I said, only in single RAID 0 containers for whatever strange reason (I think it's ZFS not wanting a hardware RAID controller in the way of the disks, preferring JBOD instead). Either way, the disks are invisible if I create a RAID array out of them, so I had no choice.

That said, I am noticing that the iSCSI is dropping sometimes, for no particular reason and at seemingly random times of the day, as reported by FreeNAS, so I may not be out of the woods yet.

In the case of iSCSI specifically, adding "more lanes" does change the limit if it's done with MPIO. To extend the metaphor, LAGG/LACP widens the highway; if you do it with MPIO, you not only widen the highway but also add a bigger on-ramp to get more cars on at a time.

The 8x1.2T SAS drives should throw a decent (for spinning disk) amount of IOPS around, but the P410 is likely bottlenecking you, as would a choice of using anything other than mirror vdevs in the ZFS config. I suspect you might have a RAIDZ-something here. Re: the iSCSI drops, check to see if your switch is reporting any high error/retransmit counts, and whether or not your client side is seeing the connection actually drop.
 

NickDaGeek

Cadet
Joined
Dec 2, 2019
Messages
8
WOW, thanks for all that, Honey Badger.

I will check what MPIO settings are on the server's MS Initiator and also see what LACP settings are on the switch (I inherited this lot when I took the job, so I'm still checking under the hood to see what I have in a lot of places). However, if I understand you correctly, I will have to remove the LAGG on the FreeNAS as it will only be presenting a single IP address to the server, is that correct? If I recall correctly, I do have a RAIDZ pool, as this is how it was set up before, and being a noob to FreeNAS I did not change it. I'm not sure exactly what RAID controller the server has; I will look into it, but I am fairly certain it's a SmartArray. Yes, I have used Dell PERC before and found it solid and reliable, but never in a FreeNAS box. If I get the time and budget to replace the SmartArray I will accept your offer of suggestions etc. and DM you before I do anything.

 

NickDaGeek

Cadet
Joined
Dec 2, 2019
Messages
8
Thanks RegularJoe, that is worth knowing. I will check the switch. I know I have a LAGG on the FreeNAS, but I'm not sure whether I have LACP running on the switch or not. There is a fair bit to look into about the L7 load balancing, as I have no idea at all what FreeBSD (is that what is behind FreeNAS?) does about load balancing.

 

NickDaGeek

Cadet
Joined
Dec 2, 2019
Messages
8
Hi Elliot Dierksen, FreeNAS is pure storage; it's a Veeam repository and nothing else. Good suggestion about testing that. Is it as simple as just pulling one of the patch leads in the LAGG pair, or is it a config-disable thing?

 
Joined
Dec 29, 2014
Messages
1,135
Hi Elliot Dierksen, FreeNAS is pure storage; it's a Veeam repository and nothing else. Good suggestion about testing that. Is it as simple as just pulling one of the patch leads in the LAGG pair, or is it a config-disable thing?
Yes, it is as simple as pulling one of the cables. It SHOULDN'T cause any disruption, but I do that at a time when it won't cause any kitten punching if there is an interruption in service. :smile:
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,087
WOW, thanks for all that, Honey Badger.

I will check what MPIO settings are on the server's MS Initiator and also see what LACP settings are on the switch (I inherited this lot when I took the job, so I'm still checking under the hood to see what I have in a lot of places). However, if I understand you correctly, I will have to remove the LAGG on the FreeNAS as it will only be presenting a single IP address to the server, is that correct? If I recall correctly, I do have a RAIDZ pool, as this is how it was set up before, and being a noob to FreeNAS I did not change it. I'm not sure exactly what RAID controller the server has; I will look into it, but I am fairly certain it's a SmartArray. Yes, I have used Dell PERC before and found it solid and reliable, but never in a FreeNAS box. If I get the time and budget to replace the SmartArray I will accept your offer of suggestions etc. and DM you before I do anything.

Ah, it's an in-guest iSCSI mount. That's a different situation. Are you using iSCSI directly from the ESXi host as well, or just directly from the guests?

If you're only using it from the guests, then the iSCSI traffic is passing through the regular LAN uplinks of your host. MPIO here would likely be the wrong play, as you'll already have LACP/LAGG set up at the hypervisor-to-switch level, and MPIO will only see that single path. If the Veeam VM has multiple virtual NICs (or direct access to a physical one passed through) it will run over those, but that's less likely.

If it's a direct in-guest mount, RAIDZ is less bad because you don't have the additional layer of the VMFS filesystem to jumble things up. It's still block storage on RAIDZ (generally slower) but the more sequential patterns of writing and reading backups is less rough to handle.

Switching the SmartArray P410 (I'm pretty confident that's the model, since it's a G6) to a PERC H200/H310 series though will very likely require you to move all the data off, destroy, and recreate the pool. Because each drive is a RAID0 of itself, it has the little label from the SmartArray that says "I'm a one-drive RAID0" and the PERC won't understand that, even less so once the PERC has been reflashed into thinking it's an LSI SAS2008.

Let me know if there's any VMware-level iSCSI going on, that will determine what the network link will impact.
 

NickDaGeek

Cadet
Joined
Dec 2, 2019
Messages
8
Hi HoneyBadger

That was a blast from the past. I have only read about that trick of flashing a PERC to an LSI; I had forgotten it could even be done. I used to do a similar trick with cheap IDE controller cards to make them into hardware RAID cards back in the day.

Yep, 100% correct on the Smart Array, it's a P410i.

FreeNAS is a physical server, not virtual, and is pure storage, no virtualisation.

iSCSI is from an ESXi-hosted guest (no ESXi iSCSI that I can see, no protocol endpoints under storage). It's from a single VM, outbound from the ESXi host to a stacked pair of Cisco switches, then to a separate G6 that boots FreeNAS directly.

The VM is a Windows 2012 R2-based Veeam server and has a pair of 10GbE virtual NICs attached to two separate vSwitches, one for management, the other for backup. The backup vNIC goes to a virtual switch backed by a pair of physical 1GbE ports on a 4-port Intel 82571EB Gigabit (copper) NIC on the host.

I would love to tell you what the physical Cisco switch port settings are for the other end of that pair of NICs, and for the other pair out from that switch to the FreeNAS G6 LAGG, but at this moment I cannot access the stacked pair. That looks like a console-lead job, or a switch reboot at the least. I will come back to you with details when I can.

cheers
Nick

 