FreeNAS Backup repository server - need a sanity check


C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Hi All,

I've been trying to put together a spec for a FreeNAS server using some of the resources you guys have on here and others on the net, and I would be very grateful if people could check over my spec / rationale for hardware choices and let me know whether what I want to do is sensible, and whether certain things are over- or under-specced.


Primary use: daily/weekly Veeam VM backup data (1 to 2TB of changed data daily; currently around 25TB total, but we want room for more)

Secondary uses: archived/template VMs from ESXi (none running from this storage server, just stored there), and ISOs for use with VMs

Chassis: SuperMicro rackmount 4U, 24x 3.5" drive slots plus 2x 2.5" slots (initially without all slots filled, so there's room to add more). The reseller said the backplane is 'fully expanded'.

Motherboard: SuperMicro dual-Xeon X10DRL-i

CPUs: 2x Intel Xeon E5-2623 v4, quad-core @ 2.6GHz. A vendor mentioned that faster cores are better than more cores for FreeNAS, so I picked the 4-core 2.6GHz parts, and with a Veeam VM (which can use a lot of CPU) on the FreeNAS server as well as FreeNAS itself I thought it would be wise to have two.

RAM: 128GB ECC (4x 32GB DDR4 PC4-19200 registered). Wanted to have enough for ARC caching plus some (around 16GB) for the Veeam VM. (I don't have specific part numbers for these, but they were listed as compatible by the SuperMicro reseller we got the quote from.)

SLOG device: 2x 240GB Samsung SM863a SATA 2.5", with power-loss protection and mirrored just in case of device failure; rated at 450MB/sec write speed. I'm guessing this would limit us to around 480MB/sec write speed for our pool, which wouldn't be a limiting factor, as 2x 1GbE will max out around 240MB/sec (and 240 only when using two separate network transport streams, otherwise half that). We don't currently have 10GbE and adding it probably isn't a possibility for the moment, but if we manage 480MB/sec writes that should be enough. I was also going to over-provision them, as I'd read only a maximum of around 16GB (about 5 seconds of writes) is ever used?

Drives: 12 X 8TB Seagate Enterprise (Exos) 7200RPM SATA (6Gb/s) drives ST8000NM0055

Storage controller: Broadcom 9300-4i 4P-int 12Gb/sec PCI-e 3.0 8X

Boot drives: 2X mirrored decent quality USB sticks

PSU: dual 920W Platinum units to cover PSU failure (power is expensive where the server will be, so higher efficiency is worth the additional cost). I don't have a model number for these, but they were specced by the SuperMicro reseller.

Drive config: one pool comprising two 6-drive RAIDZ2 vdevs (data reliability being most important here), giving 56TB usable, with enough slots to add another one or two 6-drive vdevs to the pool (perhaps at the same time, or one now and another later) as space / IO performance is required.

Veeam backup: we currently have two Veeam backup ‘target’ VMs where the data is stored, and we were wondering whether some or all of that load could be moved to a Linux VM running on the FreeNAS box itself, which would then access the same NFS share as the ESXi hosts do. This would take compute load off of our ESXi hosts and let us benefit from Veeam's compression of the data being transported between the source VM and the Veeam target.

Data accessed via: ESXi over NFS (so all synchronous writes) on dual 1Gb Ethernet, and the Veeam target VM running on the FreeNAS server.

The read/write workload for Veeam backups is currently around 66% write and 33% read, so I was wondering if we could get a rough idea of what sort of throughput / performance we'd be able to get bearing this in mind.

As noted above, we need to keep the backup data reliably, which is why I was thinking RAIDZ2 with enterprise-class drives (to survive any two drive failures) and the mirrored SLOG device, although I did read somewhere mirroring for SLOG may not be required. A rough sketch of the intended layout follows below.
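To make that concrete, here is a rough sketch of what I think the layout would look like from the command line - device names and the pool name are placeholders, the over-provisioned 20GB slices are just my guess at a sensible SLOG size, and in practice I'd build all of this through the FreeNAS GUI:
Code:
# two 6-drive RAIDZ2 vdevs in one pool
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# over-provision the SM863a's by partitioning only a small slice of each,
# then attach them as a mirrored log (SLOG) vdev
gpart create -s gpt ada0 && gpart add -t freebsd-zfs -s 20G -l slog0 ada0
gpart create -s gpt ada1 && gpart add -t freebsd-zfs -s 20G -l slog1 ada1
zpool add tank log mirror gpt/slog0 gpt/slog1

# a third (or fourth) 6-drive RAIDZ2 vdev can be added later the same way
zpool add tank raidz2 da12 da13 da14 da15 da16 da17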

Sorry if I’ve missed anything – Would love to know your thoughts and thanks for reading

P.S. If we end up setting this up, I'm happy to report on benchmarks / test results to give others some info to go on.

--C2Opps
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Motherboard: SuperMicro dual-Xeon X10DRL-i
Why not use an X9-gen board with DDR3 RAM to save more money? Also, why do you need a DP board? For your use case of simply backing data up, I doubt any process will be CPU-intensive. Are you planning on a pull from, or a push to, this backup server?
CPUs: 2x Intel Xeon E5-2623 v4, quad-core @ 2.6GHz. A vendor mentioned that faster cores are better than more cores for FreeNAS, so I picked the 4-core 2.6GHz parts, and with a Veeam VM (which can use a lot of CPU) on the FreeNAS server as well as FreeNAS itself I thought it would be wise to have two.
Here's where I am confused. Are you going to run the Veeam server on this machine, or are you planning on just backing up the Veeam data on this machine?
Boot drives: 2X mirrored decent quality USB sticks
Might as well use an SSD for boot. You'd have the space in the chassis.
Storage controller: Broadcom 9300-4i 4P-int 12Gb/sec PCI-e 3.0 8X
Why a SAS3 card? Backup servers usually don't require very high-speed controllers; wouldn't SAS2 suffice?
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Why not use an X9-gen board with DDR3 RAM to save more money? Also, why do you need a DP board? For your use case of simply backing data up, I doubt any process will be CPU-intensive. Are you planning on a pull from, or a push to, this backup server?

Here's where I am confused. Are you going to run the Veeam server on this machine, or are you planning on just backing up the Veeam data on this machine?

Might as well use an SSD for boot. You'd have the space in the chassis.

Why a SAS3 card? Backup servers usually don't require very high-speed controllers; wouldn't SAS2 suffice?

Hey Inxsible - thanks for the reply -
If we do go the FreeNAS route, this would likely be in for a service life of around 5 years, and I wanted it to be expandable later (more drives in this chassis and potentially an external chassis), which may require more RAM/CPU etc. If we end up adding RAM later, new DDR3 will start to get a fair amount more expensive as it becomes less popular, so I'm thinking a slightly higher cost now for a newer board that can be expanded more cheaply later is best?
Also, I would prefer to run a Linux VM with the Veeam software on the FreeNAS server itself - the current targets can use a lot of CPU, so it may take up half of those cores whilst busy.
As for SSDs, the 2x 2.5" slots are taken up by the SLOG devices.
And as for the SAS3 card, that was just what was in the vendor list - they didn't have many in their tested list that were HBA-only - and it wasn't a big part of the cost either...
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
So is this a pre-built system that you are buying -- including chassis, board, CPU, RAM, and HBA?

You might want to research a bit about whether FreeNAS supports Broadcom controllers. Most here tend to use IBM M1015 or Dell H200/220 or 310. Admittedly they are SAS2. I am not sure of many SAS3 HBAs that are supported in FreeNAS.
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
So is this a pre-built system that you are buying -- including chassis, board, CPU, RAM, and HBA?

You might want to research a bit about whether FreeNAS supports Broadcom controllers. Most here tend to use IBM M1015 or Dell H200/220 or 310. Admittedly they are SAS2. I am not sure of many SAS3 HBAs that are supported in FreeNAS.

Sort of - it's from a reseller where you can choose each of the parts from a drop-down list. Good point on SAS3; I will check into that, as I want things to be nicely supported/stable. I'd sort of got the feeling most people liked LSI HBAs on here, or is that no longer the case? (Although the Dell cards seem to be rebadged LSI a lot of the time.)

If the HBA were changed to a supported one, do you have any thoughts on the storage I/O performance we might be able to get out of this with the read/write mix I mentioned in the initial post?
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
I'd sort of got the feeling most people liked LSI HBAs on here, or is that no longer the case? (Although the Dell cards seem to be rebadged LSI a lot of the time.)
LSI is still king here, and the IBM M1015, Dell 220, Dell 310, and even the HP ones need to be flashed into IT mode. Once flashed, they are exactly like the LSI cards.
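If it helps, once one of those cards has been crossflashed you can sanity-check it from the FreeNAS shell with LSI's sas2flash utility (controller numbering and the exact output vary by card and firmware phase):
Code:
# list all LSI SAS2 controllers the system can see
sas2flash -listall

# details for controller 0 - the firmware type should report IT,
# and ideally the phase (e.g. P20) should match what the mps driver expects
sas2flash -c 0 -list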
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
CPUs: 2x Intel Xeon E5-2623 v4, quad-core @ 2.6GHz. A vendor mentioned that faster cores are better than more cores for FreeNAS, so I picked the 4-core 2.6GHz parts, and with a Veeam VM (which can use a lot of CPU) on the FreeNAS server as well as FreeNAS itself I thought it would be wise to have two.
Veeam will use all the cores and cycles you can throw at it if you're using heavy compression, and also during backup roll-ups.
SLOG device: 2x 240GB Samsung SM863a SATA 2.5", with power-loss protection and mirrored just in case of device failure; rated at 450MB/sec write speed. I'm guessing this would limit us to around 480MB/sec write speed for our pool, which wouldn't be a limiting factor, as 2x 1GbE will max out around 240MB/sec (and 240 only when using two separate network transport streams, otherwise half that). We don't currently have 10GbE and adding it probably isn't a possibility for the moment, but if we manage 480MB/sec writes that should be enough. I was also going to over-provision them, as I'd read only a maximum of around 16GB (about 5 seconds of writes) is ever used?
Unless you're running VMs from this storage, the only other time this may be helpful is if your Veeam repository resides on a FreeNAS iSCSI or NFS share. If you're using CIFS it won't help with anything.
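One quick way to confirm the SLOG is actually earning its keep once the share is in use - the pool and dataset names here are just examples:
Code:
# NFS writes from ESXi or the Veeam VM should be sync; check the dataset setting
zfs get sync tank/veeam

# watch the log vdev while a backup runs - if it sits idle, the SLOG isn't helping
zpool iostat -v tank 5
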
Veeam backup ‘target’ VMs where the data is stored
This is a bad idea; it will make moving data and disaster recovery more difficult. I would use a jail with Perl installed (it may take a bit of fiddling). That way the .vbk files are dropped directly on ZFS, and you can disable backup consistency checks or at least reduce their frequency. You could also use CIFS/SMB as your repository and get the same effect. As this is all running on the same machine, I can't imagine it makes much of a difference.
Data accessed via: ESXi over NFS (so all synchronous writes)
But you won't be writing from the ESXi hosts. Moving ISOs, etc. can be done over SFTP or CIFS, neither of which requires sync writes.
mirroring for SLOG may not be required
This would only be needed in two cases: one, if the SLOG device failed during a hard crash; two, if you can't afford to lose the performance in the event one fails.

You can use NFS direct access in Veeam to read your VMDKs etc. without loading your hosts/network. With a FreeNAS jail set up to work as a Veeam Linux repository, your backups will flow from your NFS share to the Veeam VM to the FreeNAS dataset configured for your jail. This gives you direct access to the backup files, eliminates the virtual disk driver and NTFS layers, provides ZFS snapshots and zfs send/receive, and if you do need to grow the dataset for your backups, you don't need to also extend an NTFS filesystem.
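A minimal sketch of what the dataset side of that could look like - the dataset name, snapshot name, and remote pool are just examples:
Code:
# dedicated dataset for the Veeam repo; a large recordsize suits big .vbk files
zfs create -o recordsize=1M -o compression=lz4 tank/veeam

# periodic snapshots give you restore points independent of Veeam
zfs snapshot tank/veeam@2018-06-18
zfs list -t snapshot -r tank/veeam

# and zfs send/receive can replicate the whole repo to another box
zfs send tank/veeam@2018-06-18 | ssh otherhost zfs receive backuppool/veeam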

I'm sure I missed things...
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Just need to mention: jails are in limbo on FreeNAS at this point. 11.2, with the new UI and iocage, should bring fully supported jails. That is listed as due in about a month, but I wouldn't gamble on it.
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Veeam will use all the cores and cycles you can throw at it if you're using heavy compression, and also during backup roll-ups.

Unless you're running VMs from this storage, the only other time this may be helpful is if your Veeam repository resides on a FreeNAS iSCSI or NFS share. If you're using CIFS it won't help with anything.

This is a bad idea; it will make moving data and disaster recovery more difficult. I would use a jail with Perl installed (it may take a bit of fiddling). That way the .vbk files are dropped directly on ZFS, and you can disable backup consistency checks or at least reduce their frequency. You could also use CIFS/SMB as your repository and get the same effect. As this is all running on the same machine, I can't imagine it makes much of a difference.

But you won't be writing from the ESXi hosts. Moving ISOs, etc. can be done over SFTP or CIFS, neither of which requires sync writes.

This would only be needed in two cases: one, if the SLOG device failed during a hard crash; two, if you can't afford to lose the performance in the event one fails.

You can use NFS direct access in Veeam to read your VMDKs etc. without loading your hosts/network. With a FreeNAS jail set up to work as a Veeam Linux repository, your backups will flow from your NFS share to the Veeam VM to the FreeNAS dataset configured for your jail. This gives you direct access to the backup files, eliminates the virtual disk driver and NTFS layers, provides ZFS snapshots and zfs send/receive, and if you do need to grow the dataset for your backups, you don't need to also extend an NTFS filesystem.

I'm sure I missed things...
Thanks kdragon!
Re the Veeam VM on the FreeNAS server itself: I was thinking of making its backup repository land directly on ZFS via NFS (I tested that NFS is accessible from a VM on FreeNAS in my test environment), not in virtual disks sitting on ZFS, so the VBK files would be directly on ZFS. Either that or, as you say, a Veeam VM running on ESXi with the repository accessed directly over NFS. I'd prefer not to use jails as I don't think that would be supported by Veeam, and I want to stay in a supported config.
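Roughly what I have in mind for that path, assuming a dataset called tank/veeam and a bhyve Linux guest - the addresses are made up, and the export itself would really be created through the FreeNAS NFS sharing UI rather than by hand:
Code:
# on FreeNAS: export the repo dataset to the Veeam VM (equivalent /etc/exports line)
/mnt/tank/veeam -maproot=root 10.0.0.50

# inside the Linux Veeam VM: mount it and point the Linux repository at the mount
mkdir -p /mnt/veeam-repo
mount -t nfs -o vers=3,hard 10.0.0.10:/mnt/tank/veeam /mnt/veeam-repo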

Also, if I move a VM to or from FreeNAS for archiving or deployment to primary storage, this will use NFS, and ISOs mounted to ESXi VMs would be over NFS too. The FreeNAS box won't be accessible from the guest OS level on VMs, just via ESXi, and that is 99% going to be NFS.
Re SLOG failure scenarios: I haven't been able to find (as I'm sure it's almost impossible to estimate) what the effect would be if a SLOG failed during a hard crash. For the cost of another SSD I'd prefer not to find out, even if it's very unlikely, and as you say two devices would also keep performance up during a failure of one, which has to be good.

Do you have a view on what I/O performance I'd be able to get with the mentioned number of disks / ZFS config with a real-world I/O load that is 70% writes / 30% reads? This has been the most difficult thing to find. I think this too is where a SLOG could help, as writes can wait a few seconds on the SLOG while reads are happening and then be flushed out, but I need to get an idea of whether it would be able to keep up with our existing backups or whether I need more disks.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I'd prefer not to use jails as I don't think that would be supported by Veeam
Last time I had to ask Veeam anything about Linux targets, they were less than helpful. That was a few years ago when 8.0 was the latest, so I'm sure it's better now, but I understand wanting to be in a supported config.
I haven't been able to find (as I'm sure it's almost impossible to estimate) what the effect would be if a SLOG failed during a hard crash
Basically, if the data has not been flushed to the pool from the SLOG, that data is lost. The bad part is that anything in the SLOG has already been reported as written to disk and can therefore easily corrupt data on your ZFS filesystem. That's why PLP drives are so important.
Do you have a view on what I/O performance I'd be able to get with the mentioned number of disks / ZFS config with a real-world I/O load that is 70% writes / 30% reads
My advice here is to base your math on standard RAID calculations and test from there. Depending on your case, it will likely outperform the estimates. A starter on IOPS calculations: Calculate IOPS in a storage array by Scott Lowe (he literally wrote the book on VMware).
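As a very rough worked example of that formula - the ~80 IOPS per 7200RPM SATA disk is an assumption, and RAIDZ2 doesn't behave exactly like classic RAID6, so treat the result as a floor to test against:
Code:
# functional IOPS = raw*read% + (raw*write% / write penalty), RAID6-style penalty of 6
awk 'BEGIN { raw = 12 * 80; print raw*0.30 + (raw*0.70)/6, "IOPS (rough floor)" }'
# prints: 400 IOPS (rough floor) for a 30% read / 70% write mix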
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Last time I had to ask Veeam anything about Linux targets, they were less than helpful. That was a few years ago when 8.0 was the latest, so I'm sure it's better now, but I understand wanting to be in a supported config.

Basically, if the data has not been flushed to the pool from the SLOG, that data is lost. The bad part is that anything in the SLOG has already been reported as written to disk and can therefore easily corrupt data on your ZFS filesystem. That's why PLP drives are so important.

My advice here is to base your math on standard RAID calculations and test from there. Depending on your case, it will likely outperform the estimates. A starter on IOPS calculations: Calculate IOPS in a storage array by Scott Lowe (he literally wrote the book on VMware).

So would the Veeam VM on FreeNAS accessing its backup repository via direct NFS onto the ZFS pool be an OK route to take, do you think?

Re SLOG - yes that would be terrible!

Ah, Scott Lowe - yes, I've used some of his material for the VCP exam. I will go over the IOPS article and see if I can make some accurate predictions.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Just to be clear on the path:
Code:
[Primary VM NFS] -> [Veeam VM] -> [FreeNAS NFS]
				 [FreeNAS bhyve]

That should be fine, and in this case the SLOG very much makes sense. I'll admit though, I have almost no NFS experience; I almost always use block storage when I can. Fewer quirks with cross-compatibility and vendor implementations. (Let's not talk about iSCSI multipath and Dell...)
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Just to be clear on the path:
Code:
[Primary VM NFS] -> [Veeam VM] -> [FreeNAS NFS]
				 [FreeNAS bhyve]

That should be fine, and in this case the SLOG very much makes sense. I'll admit though, I have almost no NFS experience; I almost always use block storage when I can. Fewer quirks with cross-compatibility and vendor implementations. (Let's not talk about iSCSI multipath and Dell...)
Ah, no, sorry / sort of - thanks for clarifying. The primary VM is sitting on an ESXi non-NFS datastore. I was getting confused about what the NFS direct path meant: I thought you meant that the Veeam VM accessed its primary backup repository via NFS directly, as opposed to the Veeam VM having VMware VMDK disks sitting on an NFS datastore. Apologies for the quick diagram - the Fibre Channel is to our storage array.


What I meant was: my preferred option is Veeam as a VM running on the FreeNAS box, as it takes the load off our ESXi hosts.
Running Veeam as a VM on FreeNAS via NFS:
[attached diagram: ZFSVeeam1.png]
The other (less preferred) option - Veeam as a VM on ESXi, saving backup data to FreeNAS via NFS:
[attached diagram: ZFSVeeam2.png]
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I don't understand your diagram. The VMs connect to the ESXi hosts via FC?!? Let's try this...
You can't use NFS and FC on the same LUN. Are you saying the Veeam VM at the primary site is mounting the FC storage and sharing it via NFS? That doesn't make any sense either.
Both of your diagrams show a Veeam instance on ESXi2. If that has direct FC access via either NPIV or passthrough, just use that, back up to a local repo, and then have Veeam replicate it to the remote site over NFS.

I would draw it out but I don't have access to Visio :(
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
Apologies Kdragon, I've confused everything - your original diagram was correct except for the source VM part.

Currently (pre-FreeNAS) we have a Veeam VM at the primary site accessing the source VMs' VMDKs on an ESXi datastore via Fibre Channel to our storage array. Veeam sends this across to the other site via its own protocol, and then another Veeam VM (the target) puts it into a Veeam repository - the repository is currently just VMware disks on another array.

This all works well and good; however, to save space on that array, which could be used for other things, we wanted to put in a FreeNAS server. The question then is, do we:

Option 1) Install a Veeam VM on FreeNAS (using bhyve, as you said) that backs up data via NFS to the same FreeNAS server it is sitting on.
This takes the Veeam CPU/network load off of the secondary site's ESXi servers, which can then be used for other VMs. This is our preferred option.

Option 2) We set up a FreeNAS box, keep our existing Veeam target VM, and just configure that VM to save backups to the FreeNAS storage via NFS. We've used Veeam with NFS backup stores sitting on a different storage device in the past and this works; however, it means additional resources used on our ESXi hosts, and it potentially reduces total backup throughput, as the network link between the ESXi host running Veeam and FreeNAS becomes the bottleneck (especially during backup transforms).
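In the notation of your earlier diagram, the two options would roughly be:
Code:
Option 1: [source VMs via FC] -> [Veeam proxy @ primary] -> (WAN) -> [Veeam target in bhyve on FreeNAS] -> [NFS] -> [ZFS pool]
Option 2: [source VMs via FC] -> [Veeam proxy @ primary] -> (WAN) -> [Veeam target VM on ESXi] -> [NFS over 1GbE] -> [FreeNAS ZFS pool]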
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
OK, so if you have Enterprise Plus (or whatever it's called for the top tier) you may benefit from having the second Veeam at the second site, as then you can use WAN acceleration. Otherwise, just set up NFS on FreeNAS and have Veeam use that as a second repo. If that's all this is used for, you can safely cut the RAM in half. For reads (template VMs, ISOs, etc.) you will have plenty of throughput - should be in the 10Gb (1GB/s) ballpark. With that said, your board only has two 1Gb ports, so roughly ~100MB/s per port, and you don't mention any other network cards.

Your network will be the biggest bottleneck by far.
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
OK, so if you have Enterprise Plus (or whatever it's called for the top tier) you may benefit from having the second Veeam at the second site, as then you can use WAN acceleration. Otherwise, just set up NFS on FreeNAS and have Veeam use that as a second repo. If that's all this is used for, you can safely cut the RAM in half. For reads (template VMs, ISOs, etc.) you will have plenty of throughput - should be in the 10Gb (1GB/s) ballpark. With that said, your board only has two 1Gb ports, so roughly ~100MB/s per port, and you don't mention any other network cards.

Your network will be the biggest bottleneck by far.

Having Veeam at just one site would just add additional load on the inter-site link (remember, it's 30% read / 70% write between the Veeam target and its storage).
Anyway, it seems like Veeam as a VM on FreeNAS would work.

Regarding network performance: yes, 120MB/sec maximum for each network stream, and potentially more with multiple clients if we use LACP (I have read and understood jgreco's thread on LACP here).

If it can do that much then it should be OK; however, are you certain it would be able to do up to 1GB/sec (internally on FreeNAS), as the 480MB/sec of the SLOG should limit (sync) write speeds to at most 480MB/sec? Also, during a mixed read/write workload like Veeam's, won't the sustained throughput drop a fair amount lower than that? Again, if FreeNAS itself (ignoring the network bottleneck) could do 120MB/sec random read and 120MB/sec random write (sustained), then that should be enough.

I was unable to do calculations based on Scott Lowe's formula, as I can't get the max sustained IOPS figures for the drives from the spec sheet or work them out from it, and Seagate weren't sure when I rang them.

After I wrote the above, however, I did find some info from someone else who used 4TB drives (drives with 23% lower throughput than the ones I specced) and came up with the results below.
He also didn't mention a SLOG device, so speeds could likely be better with a SLOG that isn't on the main pool of disks.

6x 4TB, raidz2 (raid6), 15.0 TB, w=429MB/s , rw=71MB/s , r=488MB/s
12x 4TB, 2 striped 6x raidz2, 30.1 TB, w=638MB/s , rw=105MB/s , r=990MB/s
(https://calomel.org/zfs_raid_speed_capacity.html)

I'm sure the above should be fine, given that the 2x 1Gb/s links will be the bottleneck for our workload, and this should be enough for most things.
About the 2x 1Gb links: I was thinking of getting another NIC with 4x 1Gb ports so we can have a two-link port channel to each of our two switches. Is there a recommended NIC for this, and a recommended LSI HBA? When I've had a look at the hardware recommendations on here, the docs seem a couple of years out of date.
Thanks for all your help with this :smile:

---C2
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Might as well use an SSD for boot. You'd have the space in the chassis.

I'm going to second Inxsible here. I've had nothing but bad luck with USB thumb-drive boot devices, and I've only been running at home for a couple of months. It's not just the lack of wear levelling; the HCI support appears to interact poorly with ZFS.

Small 60GB SATA SSDs can be found for $30 USD.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Having Veeam at just one site would just add additional load on the inter-site link (remember, it's 30% read / 70% write between the Veeam target and its storage).
Anyway, it seems like Veeam as a VM on FreeNAS would work.
True, for several reasons: no WAN acceleration, roll-ups happen over the network instead of locally, etc...
(remember, it's 30% read / 70% write between the Veeam target and its storage)
The r/w IO should not be an issue until you're running incremental roll-ups at the end of each job/VM (depending on how you set up the jobs), and at that point, if you have Veeam on both ends, this process does not involve the network (other than the job manager/reporting). It's all local to the Veeam VM/FreeNAS host. OK, so it uses that network, but not your WAN links.
If it can do that much then it should be OK; however, are you certain it would be able to do up to 1GB/sec (internally on FreeNAS), as the 480MB/sec of the SLOG should limit (sync) write speeds to at most 480MB/sec? Also, during a mixed read/write workload like Veeam's, won't the sustained throughput drop a fair amount lower than that? Again, if FreeNAS itself (ignoring the network bottleneck) could do 120MB/sec random read and 120MB/sec random write (sustained), then that should be enough.
If you're using NFS and your SLOG is limited to 480MB/sec, then that may be a bottleneck as well.
...is there a recommended NIC for this, and a recommended LSI HBA...
On the NIC, use Intel server-grade NICs - nothing else. As for the HBA, almost any true LSI HBA will be fine as long as it's SAS2; the case where this is not true is with SSDs. Personally, the card I use is based on the LSI 2008 chip. It was cheap and works darn well for my needs.
I'm sure the above should be fine, given that the 2x 1Gb/s links will be the bottleneck for our workload, and this should be enough for most things.
About the 2x 1Gb links: I was thinking of getting another NIC with 4x 1Gb ports so we can have a two-link port channel to each of our two switches.
o_O I thought the Veeam boxes were at different sites? How will you slap in a new card and gain more bandwidth? Or are you just talking about redundant links to the switch fabric at the remote site?
 

C2Opps

Dabbler
Joined
Jun 11, 2018
Messages
25
@kdragon75
480MB/sec write is plenty for what we need :)
OK, will look at some Intel 4-port 1Gb/s NICs.
If we just have two SSDs for a redundant SLOG, will an LSI SAS2 HBA be fine though? (I was planning on running just one HBA for all drives: 2x SSD SLOG and 12x mechanical.)
And yes, Veeam boxes at different sites - and yes, four links. With LACP we gain the bandwidth of two 1Gb NICs, but if both went to one switch that's only one of the two switches, so if that switch dies = sad times. So: two links in LACP to one switch and two in LACP to the other.
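For reference, the underlying FreeBSD config for one of those two-port LACP groups would look roughly like this - the interface names and address are examples, and in FreeNAS it would really be set up through the web UI's link aggregation settings, with a matching LACP channel configured on each switch:
Code:
# rc.conf-style example for one 2-port LACP group (igb0 + igb1)
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 10.0.0.10/24"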
 