I have set up about five FreeNAS boxes connected to ESXi by now, following the same MPIO procedure each time, but no matter what I do I can only get about 1Gb/s of throughput to any of them.
Here's what I've done:
- Set up two NICs on FreeNAS on different subnets with MTU 9000
- Set up two NICs on ESXi on different subnets with MTU 9000, with iSCSI port binding
- Enabled Round Robin (Active I/O on both NICs)
- Verified connectivity to both NICs
- Verified both paths are active and working
- Set the Round Robin IOPS limit to 1 (verification commands just below)
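For reference, this is roughly how I set and confirm the Round Robin IOPS limit from the ESXi shell (naa.XXXXXXXX is just a placeholder; the real device ID comes from "esxcli storage nmp device list"):
Code:
# Confirm the path selection policy on the iSCSI device is Round Robin
esxcli storage nmp device list -d naa.XXXXXXXX

# Set the Round Robin IOPS limit to 1 so I/O alternates between the two paths
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.XXXXXXXX --type=iops --iops=1

# Verify the limit stuck
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.XXXXXXXX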
ESXi:
iSCSI C: 10.0.3.41
iSCSI D: 10.0.4.41
FreeNAS:
iSCSI 1 igb0: 10.0.3.27
iSCSI 2 igb1: 10.0.4.27
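To double-check that MTU 9000 works end to end on both subnets, I run a jumbo-frame ping from the ESXi shell (vmk1/vmk2 are just what my bound iSCSI vmkernel ports happen to be called; 8972 is 9000 minus the IP and ICMP headers):
Code:
# Jumbo-frame ping with don't-fragment set, out each bound vmkernel port
vmkping -I vmk1 -d -s 8972 10.0.3.27
vmkping -I vmk2 -d -s 8972 10.0.4.27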
Connections on ESXi:
Code:
[root@esxi-02:~] esxcli network ip connection list | grep 10.0.3
tcp   0   0   10.0.3.41:27551   10.0.3.27:3260   ESTABLISHED   3615380   newreno   vmm0:Zimbra_Archive
tcp   0   0   10.0.3.41:43698   10.0.3.25:3260   ESTABLISHED         0   newreno
tcp   0   0   10.0.3.41:427     0.0.0.0:0        LISTEN          34525   newreno
udp   0   0   10.0.3.41:123     0.0.0.0:0                        33892   ntpd
[root@esxi-02:~] esxcli network ip connection list | grep 10.0.4
tcp   0   0   10.0.4.41:10656   10.0.4.27:3260   ESTABLISHED         0   newreno
tcp   0   0   10.0.4.41:23905   10.0.4.25:3260   ESTABLISHED     32806   newreno   idle0
tcp   0   0   10.0.4.41:427     0.0.0.0:0        LISTEN          34525   newreno
udp   0   0   10.0.4.41:123     0.0.0.0:0                        33892   ntpd
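On the ESXi side I also confirm the port binding and path states along these lines (vmhba64 is just the software iSCSI adapter name on my host; "esxcli iscsi adapter list" shows the real one):
Code:
# Both iSCSI vmkernel ports should show up as bound network portals
esxcli iscsi networkportal list --adapter=vmhba64

# Both paths to the FreeNAS LUN should show as active
esxcli storage core path list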
Network speed:
(at 12:35 I switched the path selection policy from Most Recently Used to Round Robin)
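The switch itself was done per device; from the shell the equivalent would be something like this (again with a placeholder device ID):
Code:
# Change the device's path selection policy to Round Robin
esxcli storage nmp device set --device=naa.XXXXXXXX --psp=VMW_PSP_RR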
Are there settings I should be tweaking to make this go faster? Right now I'm just using vMotion to test the speed. I realize that in this example it wasn't maxing out the single connection before I switched, so maybe this particular vMotion is a bad example, but this has been the pattern so far. I'll work on a better benchmark to show the difference, but in the meantime I'm still open to suggestions.
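For a more repeatable test than vMotion, I'm planning to run something like this fio job inside a Linux guest whose virtual disk sits on the iSCSI datastore (assuming fio is installed; the file name is just an example), while watching the network view in esxtop on the host to see whether both vmnics actually carry traffic:
Code:
# Sequential read, direct I/O, queued, time-based so it runs long enough to watch in esxtop
fio --name=mpio-seqread --filename=mpio-test --rw=read --bs=1M --size=8G \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based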