Hey folks,
I have been playing with my FreeNAS box for a little while now and am quite happy with it. I've added some new equipment and am experimenting with iSCSI as VM storage for my ESXi whitebox. I bought a new 16-port switch that supports jumbo frames, LACP, etc., as well as two more dual-port Intel server NICs. I installed one NIC in the FreeNAS box and one in the ESXi box, so each side has two dedicated gigabit ports for iSCSI traffic.

For testing I created a file extent on my existing pool (I have an IBM M1015 on order so I can build a dedicated RAID 10 pool from four 1TB drives I have lying around), and that extent is connected to the ESXi box. ESXi is configured for round robin, and both paths show as active and are being used.

My issue: when I transfer VMs between the local SSD datastore on the ESXi box and the file extent on the FreeNAS box, I only see about 200-400 Mbps total across both interfaces, i.e. 100-200 Mbps per interface. I know I won't get the full 2 Gbps, but this is much lower than I was expecting. The disk subsystem isn't the bottleneck, since I routinely hit 950 Mbps with CIFS over my other NICs.
I know there are a couple of different ways to configure ESXi for iSCSI MPIO so I was wondering if anyone could give me some pointers on how they configured their systems. Please feel free to ask config questions, I'm just not sure what would be relevant here.
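For reference, here's roughly how I set mine up from the ESXi shell (this is the esxcli approach on ESXi 5.x; the adapter name vmhba33, vmkernel ports vmk1/vmk2, and the naa.xxxx device ID are placeholders for my actual values):

```shell
# Bind each dedicated iSCSI vmkernel port to the software iSCSI adapter
# (vmhba33, vmk1, vmk2 are placeholders for your adapter/port names)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Set the path selection policy for the FreeNAS LUN to round robin
# (replace naa.xxxx with the device ID from
#  "esxcli storage nmp device list")
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR

# Optionally switch paths after every 1 I/O instead of the default
# 1000, which is a common tweak to spread load across both links
esxcli storage nmp psp roundrobin deviceconfig set \
    --device=naa.xxxx --type=iops --iops=1
```

Happy to post the output of any of the list/get commands if that would help diagnose this.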
Thanks,