FreeNAS 11.x LACP/LAGG with Cisco Port Channel

marlonc

Explorer · Joined Jan 4, 2018 · Messages: 75
Hello,

I am redesigning my FreeNAS setup now that I have a Cisco Nexus 5596UP with plenty of 10Gb ports. There have been a lot of threads about LACP and laggs to enterprise switches, with some of the feedback saying "don't do it". It's kind of hard to follow what the best practice is since everyone's environment is unique. I have read the manuals and watched many YouTube videos with differing opinions.

So I am sure I will get a few questions and different opinions here.

I have attached a PDF to illustrate the topology I wish to deploy.

Equipment:

ESXi Hosts

2 x ESXi 6.7 hosts in a vCenter 6.7 cluster - no spinning disks
Each ESXi host: Dell R610 with 4 onboard 1Gb copper ports and a 2-port Intel X520-DA2

Switches
1 x Layer 3 Core Router - Routing for the entire network
1 x Cisco Nexus 5596UP - Layer 2 Only

Storage
Dell R710 with 4 onboard 1Gb copper ports and a 2-port Intel X520-DA2, running FreeNAS 11.1 - RAIDZ2 - 24TB raw storage = 16TB of usable storage

Desired Topology:

VLANs

VLAN 99 = Management VLAN, untagged on Port 1 of each ESXi host ONLY, on the Core Router.
VLANs 2-9 = Data, Voice, Wireless, etc. will be untagged/tagged on the Core Router for the VM network traffic.

VLAN 10 = iSCSI VLAN, only on the Cisco Nexus, applied as the native VLAN on the LAGG connected to FreeNAS - can this be done?
VLAN 20 = vMotion VLAN, only on the Cisco Nexus

Core switch with a 20Gb LAGG to the Cisco Nexus 5596UP = this is up and running with no issues.

Connections:

1. Connect FreeNAS to the Cisco Nexus with a 20Gb LAGG on iSCSI VLAN 10, untagged/native (rough config sketch below the list).

2. Connect the first 10Gb port from each ESXi host to the Cisco Nexus on iSCSI VLAN 10, untagged/native.

3. Connect one of the 1Gb copper ports from each ESXi host to the Cisco Nexus on vMotion VLAN 20, untagged/native.

4. Connect a 1Gb copper port from each ESXi host to the Core Router on Management VLAN 99, untagged/native.

5. Connect the second 10Gb port from each ESXi host to the Core Router for VLANs 2-9, untagged/tagged, as the VMs will use vCenter distributed switching.

Note: I forgot to include in the diagram the second 10Gb port from each ESXi host for the VM traffic, which should be terminated on the Core Router.
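
For connection 1, here is roughly what I have in mind on the Nexus side. It's only a sketch - the member ports and the port-channel ID are placeholders, and I still need to verify the exact syntax against my NX-OS release:

feature lacp
interface Ethernet1/1
  channel-group 10 mode active
interface Ethernet1/2
  channel-group 10 mode active
interface port-channel10
  switchport mode trunk
  switchport trunk native vlan 10
  switchport trunk allowed vlan 10
  spanning-tree port type edge trunk

The intent is LACP active on both member ports, with VLAN 10 carried untagged as the native VLAN on the port channel so FreeNAS never has to deal with VLAN tags.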

So my main question: is FreeNAS 11.x capable and stable enough to run a 20Gb LACP LAGG to the Cisco Nexus?

Thanks,
 

Attachments

  • Freenas to Cisco.pdf
    181.2 KB
Joined Dec 29, 2014 · Messages: 1,135
Yes, that can absolutely be done. I was doing it until recently (switched from 2 x 10G LAGG to 1 x 40G). The question is the load balancing method that the LAGG/port channels use. It will work fine if you configure it properly, but won't get the best distribution between the links if load balancing isn't correct. It is important to remember that LAGG/port channel is load balancing, NOT bonding. The maximum bandwidth any 1 conversation will get is the bandwidth of 1 physical link.
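
On the Nexus that hash is set globally, and on the FreeBSD side the lagg has its own hash setting. Something along these lines is what I mean - check which keywords your NX-OS release actually offers (some platforms also have L4-port-aware variants), and lagg0 is just an example interface name:

Nexus (global config):  port-channel load-balance ethernet source-dest-ip
FreeNAS/FreeBSD:        ifconfig lagg0 lagghash l2,l3,l4

With multiple ESXi hosts (and ideally multiple iSCSI sessions per host), an IP-aware hash, or an L4-port-aware one where available, gives the conversations a decent chance of landing on different member links.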
 

marlonc
Ok, perfect. Wish I had a 40Gb QSFP+ port... Would you recommend creating the LAGG in CLI versus GUI in your experience?

Storage is 6 x 4TB drives set up as 3 x 2-way mirrored vdevs.

Also, since I will have a combination of VMs and personal storage, are mirrored vdevs in one ZFS pool the recommended design, or should I use RAIDZ2? I read that RAIDZ2 has poor IOPS performance.

Thanks.
 
Joined Dec 29, 2014 · Messages: 1,135
Would you recommend creating the LAGG in CLI versus GUI in your experience?
I wouldn't know anything about the GUI. I am a grumpy old CLI kind of guy. :)
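For reference, the raw FreeBSD commands underneath it look something like this (ix0/ix1 are just assumed names for the X520 ports, and the address is a placeholder on your iSCSI subnet). Keep in mind that on FreeNAS a lagg built by hand at the shell won't survive a reboot - it has to be defined through the GUI/API so it lands in the config database:

ifconfig ix0 up
ifconfig ix1 up
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1
ifconfig lagg0 inet 10.0.10.2/24 up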
Also, since I will have a combination of VMs and personal storage, are mirrored vdevs in one ZFS pool the recommended design, or should I use RAIDZ2?
The overwhelming consensus is that mirrors give you more IOPS than any RAIDZ configuration. I use RAIDZ because my stuff is mostly a lab, and I don't want to lose that much space. If I were doing production VMs, I would definitely do mirrors.
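
In zpool terms the trade-off for 6 x 4TB looks like this (device names are placeholders):

# 3 x 2-way mirrors: ~12TB usable, 3 vdevs worth of IOPS
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

# RAIDZ2: ~16TB usable, but only about 1 vdev worth of IOPS
zpool create tank raidz2 da0 da1 da2 da3 da4 da5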
 

marlonc
Ya man, I am a CLI guy too. I have VMs in prod and in a lab as well. I may have to replace the 4TB drives with 6TB drives just to get the additional storage for my prod and convert the lab to RAIDZ2.

BTW, is there any drawback to having 6 x 4TB drives in one mirror vdev, or should I keep it as 3 x 2-way mirror vdevs?
 
Joined Dec 29, 2014 · Messages: 1,135
BTW, is there any drawback to having 6 x 4TB drives in one mirror vdev, or should I keep it as 3 x 2-way mirror vdevs?
I don't use a lot of mirrors, but I think you would want multiple vdevs. I THINK if you put 6 drives in a single mirror vdev, that would give you the size of 1 drive with 5 copies. The way to get more IOPS is to have more vdevs, so your 3 x 2 mirrored vdevs would give you the most IOPS.
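
In other words (placeholder device names again), a single 6-wide mirror vdev would be:

# one 6-way mirror vdev: usable space of a single drive (~4TB),
# every block kept on all 6 disks, only 1 vdev worth of write IOPS
zpool create tank mirror da0 da1 da2 da3 da4 da5

versus the 3 x 2 layout, which gets you ~12TB and stripes across 3 vdevs.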
 

marlonc
One question: would you separate the VMs onto their own pool with mirrored vdevs for the IOPS, and personal data onto a different pool with RAIDZ2? In your design I see you have 2 pools, D2700 and D2600, with spare drives.
 
Joined Dec 29, 2014 · Messages: 1,135
Would you separate the VMs onto their own pool with mirrored vdevs for the IOPS, and personal data onto a different pool with RAIDZ2? In your design I see you have 2 pools, D2700 and D2600, with spare drives.
If I were running production VMs, yes. I only have 3 VMs that you could call production (mail server, vCenter, PowerChute appliance) and they aren't very demanding. I keep everything in a single pool on my main FreeNAS. The secondary one is mostly just a backup. It seemed logical to me to have the different external storage units in their own pool. Mirrors are the overwhelming consensus for any real VM storage workload.
 

marlonc
My Dell R710 has 6 x 4TB drives, so I was thinking of replacing 2 x 4TB drives with 2 x 8TB drives as a mirrored vdev just for my 16 VMs in Pool 1, and then creating 2 mirrored vdevs from the remaining 4TB drives for storage, which leaves me no spare drives. Man, I am trying to avoid getting a Dell R720XD with 12 x 4TB drives... lol
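
Rough math on that layout (placeholder device names, raw capacity before ZFS overhead and free-space headroom):

# Pool 1 (VMs): 2 x 8TB as one mirror vdev -> ~8TB usable
zpool create vmpool mirror da0 da1

# Pool 2 (personal storage): remaining 4 x 4TB as 2 x 2-way mirrors -> ~8TB usable
zpool create storage mirror da2 da3 mirror da4 da5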
 