iSCSI Multipath: speed is the same with one or two paths active

Status
Not open for further replies.

danypd69

Dabbler
Joined
Mar 20, 2013
Messages
18
I have installed a FreeBSD VM and configured the iSCSI initiator for a single connection; it works (a bit slow, but it's just a test).
How can I enable multipath?

This is my /etc/iscsi.conf

Code:
nas {
        initiatorname   = test
        TargetName      = iqn.2011-03.org.example.istgt:freenas
        TargetAddress   = 192.168.10.1:3260,1
}
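
I'm guessing that a second path is just a second block pointing at a second portal address (192.168.11.1 below is made up for the example):

Code:
nas2 {
        initiatorname   = test
        TargetName      = iqn.2011-03.org.example.istgt:freenas
        TargetAddress   = 192.168.11.1:3260,1
}

and that the two resulting da devices would then be combined with something like gmultipath, but I'm not sure this is the right approach.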
 

anpilog

Cadet
Joined
Apr 3, 2013
Messages
4
Hi danypd69,

Have you solved this problem?

I have an identical case.

Except:
Target: FreeNAS 8.3.0
Initiator: XenServer 6.0.2

I have three Gb links between FreeNAS and XenServer, and the overall bandwidth is limited to 1Gb/s.
All symptoms are exactly the same.

I've checked the network traffic, and it flows evenly through all interfaces.
 

danypd69

Dabbler
Joined
Mar 20, 2013
Messages
18
No, I have not solved it yet.
I will try to add a new NIC to the server to see if something changes.
 

anpilog

Cadet
Joined
Apr 3, 2013
Messages
4
Yeah...

I saw a few posts on the Citrix forum with the same problem and no solution.
I don't think an extra NIC would help, because each NIC is already able to provide 1Gb/s of bandwidth simultaneously.

Anyway, please post the solution here if you find it.

Thanks
 

danypd69

Dabbler
Joined
Mar 20, 2013
Messages
18
At this point I would like to know if someone is getting more than 1Gb/sec from FreeNAS with 2 interfaces; if so, please share your initiator configuration.
I have done tons of tests, changed the Ethernet cards, the switch, and the cables, but I'm unable to make it work.
 

anpilog

Cadet
Joined
Apr 3, 2013
Messages
4
I also did a lot of experiments with similar results.
In my experiment I have 3 interfaces.
Highest speed for one initiator is 110-120MB/s.

And here is what I can add:
Two interfaces: 90-100MB/s
One interface: 60-80MB/s

A fourth interface doesn't increase the bandwidth.

Interestingly, when I run the speed test from two initiators simultaneously (connected through three interfaces), each of them runs at 90-100MB/s.

So it looks like each initiator is able to get only ~1Gb/s.

I'm very interested in whether anybody gets better results than I do...
I saw a few posts on the Citrix forum with the same issue and similar results.
Google turns up no examples of high bandwidth in an iSCSI multipath configuration.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So this is a Citrix issue, then? I was going to try this under VMware, because multipath under VMware seems to work fine, so that'd just be a waste to go mess around with again (the VMware side is FINICKY but it works).
 

danypd69

Dabbler
Joined
Mar 20, 2013
Messages
18
I'm not using Citrix but a standard Debian Linux distribution.
The initiator is open-iscsi (http://www.open-iscsi.org/). I don't know if Citrix uses the same initiator, but if it does, that could be an indication that something is wrong in open-iscsi.
anpilog, can you check?
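
For reference, the two sessions on my Debian box are brought up with a pretty standard open-iscsi setup, roughly like this (the interface names and portal address are just placeholders for my test box):

Code:
# one iSCSI iface per physical NIC (hypothetical names)
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth1
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth2
# discover the target through both interfaces, then log in to all sessions
iscsiadm -m discovery -t sendtargets -p 192.168.10.1:3260 -I iface0 -I iface1
iscsiadm -m node --loginall=all

After login the same LUN shows up twice (e.g. sdb and sdc) and dm-multipath combines them into one device.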
 

anpilog

Cadet
Joined
Apr 3, 2013
Messages
4
I'm not using Citrix but a standard Debian Linux distribution.
The initiator is open-iscsi (http://www.open-iscsi.org/). I don't know if Citrix uses the same initiator, but if it does, that could be an indication that something is wrong in open-iscsi.
anpilog, can you check?

I did tests with XenServer, CentOS, and Debian.
I got the best performance with XenServer.

XenServer uses open-iscsi.
 

danypd69

Dabbler
Joined
Mar 20, 2013
Messages
18
I have installed another PC for testing; this time I used Ubuntu Server instead of Proxmox.
I have been able to reach about 150MB/sec; it's not much, but it's better than 115MB/sec.
To obtain this result I changed rr_min_io to 1 in /etc/multipath.conf.

The main difference between the Proxmox host and this one is that Proxmox enables a bridge on the network interfaces, so it may be that the bridge causes some problem.
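
In case it helps anyone searching later, the relevant part of my /etc/multipath.conf is roughly this (I set it in the defaults section; a devices or multipaths section can override it, and newer kernels use rr_min_io_rq instead):

Code:
defaults {
        path_grouping_policy    multibus
        path_selector           "round-robin 0"
        # send 1 I/O down a path before switching, instead of the default 1000
        rr_min_io               1
}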
 

Got2GoLV

Dabbler
Joined
Jun 2, 2011
Messages
26
Read my post on page 1.
Post #8.

This is how MPIO works.
It will not saturate all links to their max capacity with one connection.
It is working as designed.

It is not an aggregation solution.

MPIO will use more bandwidth on all interfaces with multiple connections.
But each connection will still be limited to the equivalent of one of the interfaces.
 

danypd69

Dabbler
Joined
Mar 20, 2013
Messages
18
OK, I have read your post.
If I understand it correctly, I should have more than one connection for each interface to maximize the bandwidth; is that correct?
 

pdanders

Dabbler
Joined
Apr 9, 2013
Messages
17
I only have experience setting up MPIO on VMware, but I can tell you how it works there (and it does work). Your best practice should be to have your VM host (the ESXi server for VMware) set up with 2 (or more) dedicated NICs for iSCSI. Each of these NICs should be assigned an IP address in a separate subnet. On the FreeNAS side you do the same thing: you set up 2 (or more) NICs, each with an IP address in a separate subnet (you should have a 1-to-1 relationship between NICs and subnets between an individual ESXi server and your FreeNAS system). I don't even set up routing for these subnets, since it's not needed in my setup. The end result is that each NIC on the ESXi server can only see one NIC on the FreeNAS system... so VMware sees this as two totally separate paths that can be load-balanced over using round robin.
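
As a rough sketch (the addresses and the vmhba/vmk names below are only examples, not from a real config), the layout and the ESXi port binding look something like this:

Code:
# Addressing plan: one subnet per path
#   ESXi  vmk1 192.168.10.2/24  <->  FreeNAS NIC1 192.168.10.1/24
#   ESXi  vmk2 192.168.11.2/24  <->  FreeNAS NIC2 192.168.11.1/24

# bind each iSCSI vmkernel port to the software iSCSI adapter
# (find the adapter name with "esxcli iscsi adapter list")
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# use the Round Robin path selection policy on the LUN (device ID is a placeholder)
esxcli storage nmp device set --device=naa.XXXXXXXXXXXX --psp=VMW_PSP_RR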
 

Got2GoLV

Dabbler
Joined
Jun 2, 2011
Messages
26
OK, I have read your post.
If I understand it correctly, I should have more than one connection for each interface to maximize the bandwidth; is that correct?

Not exactly.
You need more than one connection from the initiator to the target.
Not for each interface.

MPIO is not for bandwidth aggregation.
But, it can max out all the bandwidth links in cases where there are enough separate connections between targets and initiators so that each connection ends up cycling between links randomly enough.

It will only really use one link at a time, but it will cycle to the next link every X number of IOs (configuration setting).

In the case of ESX, this defaults to 1,000 IOs.

So, say you transfer a file from the target...
MPIO will start the transfer on one link, and after 1,000 IOs, it will switch to the next link, and so on.

If you start many different connections, you will be able to use more bandwidth than the equivalent of one of the Gig links.
But this would normally happen if, say, you have many different/busy servers accessing the target.

With a single connection (or a low number of connections), MPIO will not use much more, if any more, than the equivalent bandwidth of one of the MPIO links.
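
For ESX(i) specifically, that interval can be checked and changed from the CLI once the LUN is on the Round Robin policy; a sketch with a placeholder device ID:

Code:
# show the current round robin settings for the LUN
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.XXXXXXXXXXXX
# switch from the default of 1,000 IOs per path to 1 IO per path
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.XXXXXXXXXXXX --type=iops --iops=1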


I only have experience setting up MPIO on VMware, but I can tell you how it works there (and it does work). Your best practice should be to have your VM host (the ESXi server for VMware) set up with 2 (or more) dedicated NICs for iSCSI. Each of these NICs should be assigned an IP address in a separate subnet. On the FreeNAS side you do the same thing: you set up 2 (or more) NICs, each with an IP address in a separate subnet (you should have a 1-to-1 relationship between NICs and subnets between an individual ESXi server and your FreeNAS system). I don't even set up routing for these subnets, since it's not needed in my setup. The end result is that each NIC on the ESXi server can only see one NIC on the FreeNAS system... so VMware sees this as two totally separate paths that can be load-balanced over using round robin.

Correct.
But this is still not link aggregation.
This is simply link redundancy.
MPIO is not bandwidth aggregation.
It works well for what it was designed for, and on busy configurations with lots of connections, the bandwidth of all links is maximized.
 

danypd69

Dabbler
Joined
Mar 20, 2013
Messages
18
OK, my setup is exactly the same as the one described by pdanders, and after removing the Linux bridge I've been able to reach about 150MB/sec, so I think it is working as intended.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can certainly twiddle various variables to get ESXi to use both links more aggressively. This link describes some of the simpler stuff you can do.
 

13year

Cadet
Joined
Dec 13, 2015
Messages
1
Posting this so Google will pick it up.
My issue was on the FreeNAS side.
1. You have a portal with 192.168.7.1 and 192.168.8.1.
2. Create an initiator, a target/device extent, and a target/extent association for each disk - if you do not do this, the first ESX host that gets the disk will prevent the others from seeing it.
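
For reference, on the newer CTL-based target (FreeNAS 9.3+) the GUI settings from steps 1 and 2 end up in a generated ctl.conf that looks roughly like this (the zvol path and target name are made up for the example):

Code:
portal-group pg0 {
        listen 192.168.7.1
        listen 192.168.8.1
}

target iqn.2005-10.org.freenas.ctl:disk1 {
        portal-group pg0
        # one target and extent per disk, so every ESX host can see it
        lun 0 {
                path /dev/zvol/tank/disk1
        }
}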
 