The adapter I have is listed above.
I have a Cisco Nexus 5020 10Gb switch, and my ESXi 6 hosts have Intel 520DA dual 10Gb adapters.
I'm using VMware to load-balance the links with iSCSI, so no LACP or anything like that.
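For reference, the paths are just set to round robin on each host rather than any link aggregation; the setup is along these lines (the naa ID below is a placeholder, not my actual device, and the IOPS limit is a common tweak people make, not necessarily what I have set):

esxcli storage nmp device list                                         # find the iSCSI device IDs
esxcli storage nmp device set --device naa.XXXXXXXX --psp VMW_PSP_RR   # set the path policy to round robin
esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXXXXXX --type iops --iops 1   # optional: switch paths every I/O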
We keep our VMs in a thick-provisioned state. Usually, when moving from local disks on the host to the iSCSI storage, we would see rates of about 600 mbit/s (around 60% usage on each link); right now we're seeing about 50-100 mbit/s per link.
I did have some L2ARC tunables turned up to try to utilize the Intel 750 PCIe SSD that we have, but I've found that if I reset them to the default tunable settings, it appears to work faster... not sure. I have been slammed with work the past few days and obviously haven't been able to do proper testing.
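For reference, the L2ARC knobs I was messing with were the usual write ones; as far as I know the stock defaults look like this (listing them only so it's clear what I mean by "default", not what I had them cranked to):

sysctl vfs.zfs.l2arc_write_max     # default 8388608 (8 MiB/s feed rate)
sysctl vfs.zfs.l2arc_write_boost   # default 8388608
sysctl vfs.zfs.l2arc_noprefetch    # default 1 (don't cache prefetched/streaming reads)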
My problem is that all I did was update to 9.3.1 and upgrade the firmware on my LSI adapters to be compliant.
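If anyone wants to double-check the same thing on their end, the way I compared firmware to driver from the FreeNAS shell was roughly this (I just matched the firmware to whatever driver version the boot messages reported):

sas2flash -listall       # firmware/BIOS version on each LSI controller
dmesg | grep -i mps      # mps driver version the OS is actually running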
Sounds like it's just me having the problem, so I'll have to migrate the data off one of the servers and do some testing.
Let me fix this for you since you didn't read the link from above.
The adapter I have is listed above.
I have a Cisco Nexus 5020 10Gb switch, and my ESXi 6 hosts have Intel 520DA dual 10Gb adapters.
I'm using VMware to load-balance the links with iSCSI, so no LACP or anything like that.
We keep our VMs in a thick-provisioned state. Usually, when moving from local disks on the host to the iSCSI storage, we would see rates of about 600 megabytes/s (around 60% usage on each link); right now we're seeing about 50-100 megabytes/s per link.
I did have some L2ARC tunables turned up to try to utilize the Intel 750 PCIe SSD that we have, but I've found that if I reset them to the default tunable settings, it appears to work faster... not sure. I have been slammed with work the past few days and obviously haven't been able to do proper testing.
My problem is that all I did was update to 9.3.1 and upgrade the firmware on my LSI adapters to be compliant.
Sounds like it's just me having the problem, so I'll have to migrate the data off one of the servers and do some testing.