iSCSI multipath not working correctly


Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
We use a FreeNAS machine running on a Supermicro A1SAM-2750F: Intel Atom C2750, 16GB DDR3 ECC, LSI 2308 PCIe x8, 6x 1TB WD Enterprise, 2x 2TB WD Enterprise.

We have one NIC for management and the FreeNAS web interface; the other three are connected via iSCSI to our ESXi machine.
ESXi specs:
Supermicro 5019S-ML
Intel Xeon E3 3.6GHz
Intel i350 NIC
32GB DDR4 ECC
Intel SSD DC S3500 80GB MLC

We have configured our iSCSI to use multipathing with "Round Robin", and we can see traffic flowing over all interfaces.
But we can't reach speeds over 1 Gbit/s (112 MB/s).

What could we have done wrong?

We tried enabling jumbo frames (MTU 9000), but with jumbo frames enabled on both ESXi and FreeNAS we got the following message on our FreeNAS box:
"no ping reply (nop-out) after 5 seconds dropping connection"

We also had problems finding our iSCSI devices on our ESXi host with jumbo frames enabled.
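
We could presumably verify whether jumbo frames actually pass end-to-end with a do-not-fragment ping at the maximum payload (a sketch; the FreeNAS IP here is a placeholder for your own). On the ESXi host:
Code:
# 8972 bytes = 9000 MTU minus 28 bytes of IP + ICMP headers; -d sets don't-fragment
vmkping -d -s 8972 192.168.10.20

If that fails while a normal vmkping works, some hop (NIC, vSwitch, or physical switch) isn't passing 9000-byte frames.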
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Skip jumbo; it's a fix for a problem that doesn't exist on modern hardware. Make sure ESXi is set to switch paths after every block.
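
In case a datastore isn't on round robin yet, it can be set per device first (a sketch; naa.xxxx is a placeholder for your device ID):
Code:
esxcli storage nmp device set -d naa.xxxx --psp VMW_PSP_RR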
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Excuse me, but what do you mean?
Round Robin is set to change paths after 1,000 I/Os by default. Like jgreco said, set ESXi to change paths after every I/O.
The ESXi command is:
Code:
for i in $(esxcfg-scsidevs -c | awk '{print $1}' | grep naa.xxxx); do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done

Where naa.xxxx is the first few characters of your NAA IDs.
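
If you're not sure what your device IDs look like, you can list them first (illustrative; FreeBSD iSCSI extents may also show up as t10.FreeBSD_iSCSI_Disk... names rather than naa...):
Code:
esxcfg-scsidevs -c | awk '{print $1}'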

Check the change with the command:
Code:
esxcli storage nmp device list


You do not need to reboot the host for the changes to take effect.
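
To check a single device (same placeholder), look for iops=1 in the "Path Selection Policy Device Config" line of the output:
Code:
esxcli storage nmp device list -d naa.xxxx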
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Dudde, you should learn to understand the basics of the systems you use...

  1. esxcli storage nmp psp roundrobin deviceconfig set -d t10.FreeBSD_iSCSI_Disk______a0369f3d2b7c000_________________ -I 1 -t iops
  2. esxcli storage nmp device list
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
OK, kids, play nice. Consider it a moderator warning.
 

Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
It's okay, I did find out what you meant.

I can take it; I actually just started playing with ESXi yesterday.
 

Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
Well, it almost works perfectly.
Using dd on a Linux VM we get about 360 MB/s - that's all good.
But on our Windows 2008 server we still only manage around 110 MB/s.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Well, it almost works perfectly.
Using dd on a Linux VM we get about 360 MB/s - that's all good.
But on our Windows 2008 server we still only manage around 110 MB/s.
What are you testing the Windows server with?
 

Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
File copy from one LUN to another.

We tried CrystalDiskMark, but I didn't trust those results to be accurate.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Just to be sure: the Windows server runs as a vSphere guest with a non-thin VMDK?
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Also, can you please post a screenshot of your full CrystalDiskMark test?
 

Dudde

Explorer
Joined
Oct 5, 2015
Messages
77
Here's the image from CrystalDiskMark:
[CrystalDiskMark screenshot]


And here's the output from running dd on our Linux VM:
Code:
dd if=/dev/zero of=/mnt/test.dat bs=1024 count=500k
512000+0 records in
512000+0 records out
524288000 bytes (525 MB) copied, 2.60701 s, 201 MB/s
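
For what it's worth, bs=1024 is a very small block size, and without direct I/O the guest's page cache can skew the result. A more conservative run might look like this (a sketch, assuming GNU dd in the Linux guest, reusing the same test file):
Code:
# 1 MiB blocks, bypassing the guest page cache
dd if=/dev/zero of=/mnt/test.dat bs=1M count=500 oflag=direct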
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
With a queue depth of 32, your results show that both links are being used.
On my systems, a queue depth of 1 (that's the sequential part in your image) is also limited to one NIC.
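
Back of the envelope (assuming roughly 112-125 MB/s of usable throughput per 1 Gbit/s link):
Code:
1 x 1 Gbit/s  ~ 112-125 MB/s   -> about the ~110 MB/s seen on the Windows copy
3 x 1 Gbit/s  ~ 336-375 MB/s   -> about the ~360 MB/s seen from the Linux dd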
 