MPIO iSCSI Network Performance


dchmax

Cadet
Joined
Apr 1, 2013
Messages
7
I'm trying to determine the best practice for network configuration for iSCSI. I would think multiple adapters would be assigned IPs in our storage subnet (e.g. 192.168.1.101, .102, .103), but the software does not allow this because they would be in the same subnet. Clients will be a mixture of Linux, ESXi 5.1, and Windows.

If an answer has been posted for this please point me in the right direction.

Thank you
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I would think multiple adapters would be assigned IPs in our storage subnet (e.g. 192.168.1.101, .102, .103), but the software does not allow this because they would be in the same subnet.

What? My FreeNAS box has one IP right now on the same subnet (192.168.2.x, with a subnet mask of 255.255.255.0).

Can you elaborate a lot more on your problem, and include your hardware specs, software version, and a little more about what you intend to use your FreeNAS machine for?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
OP has a misguided notion about how IP networking ought to work, and wants IP addresses bound to physical adapters so that he can put several on the same network. This is not a sane way to do IP networking, but "It Works In Windows."

The correct ways to do it are through link aggregation, or the use of separate subnets.
 

dchmax

Cadet
Joined
Apr 1, 2013
Messages
7
I have a Dell PowerEdge R710 with 2 quad-core processors and 96 GB of RAM. Two disk shelves are connected to a PERC 800 controller, and the disks on each shelf are configured as RAID 50. I am running FreeNAS 8.3.1 from a 4 GB flash drive inside the server. I have 4 network ports; I'd like to use ports 1 and 2 for management and CIFS, and the remaining 2 for iSCSI traffic.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Read up on how IP networking works. Based on your infrastructure and configuration, that is something you will need to set up for yourself.

Now, if you are getting an error or something isn't working how it should, please provide the error messages or the settings you used, and what did and didn't work, so we can assist with specific problems.
 

dchmax

Cadet
Joined
Apr 1, 2013
Messages
7
You are correct; I'm just looking for best practice. Currently we are using EqualLogic storage arrays for iSCSI, and each physical port has an IP on the same subnet.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi dchmax,

I can't say for sure that this is the best/most technically correct way to set things up, but here's what I do on my filer:

- 2 Intel "em" ports aggregated using LACP on 10.x.x.2/24 for management & CIFS.
- 1 Intel "em" port on 10.x.y.29/30, connecting to a vmkernel port at 10.x.y.30/30 on its own vSwitch.
- 1 Intel "em" port on 10.x.y.33/30, connecting to a vmkernel port at 10.x.y.34/30 on its own vSwitch.

-Will
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Will,

While that's technically fine, for the sake of clarity, let me describe it a bit differently... the average user has very little idea what a "/30" is.

You can set up as many "storage" subnets as you wish/need/etc. You could use "10.0.1.10/24" for one interface and "10.0.2.10/24" for the second. These are two distinct IP subnets (as the /30's were, but more obviously distinct). You then attach your iSCSI initiators to both networks. It works fine.
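
From the shell, that amounts to something like this (a minimal sketch; em0/em1 and the addresses are hypothetical, and on FreeNAS you'd make these assignments in the GUI so they persist across reboots):

# one storage subnet per physical interface
ifconfig em0 inet 10.0.1.10 netmask 255.255.255.0
ifconfig em1 inet 10.0.2.10 netmask 255.255.255.0

# sanity check: each subnet should show exactly one link-level route
netstat -rn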

FreeBSD is a sophisticated, modern operating system, and the way traffic is routed is based on an abstracted networking subsystem. Multiple interfaces on a single subnet is an oxymoron; to make it "work" the way you want, there would need to be another selection layer in the network stack to handle traffic delivery, in addition to the route table lookup. Essentially, another layer would have to notice that a given destination has multiple possible links, and then search to see which of those links is best, which adds a lot of complexity. Modern systems typically don't do this because it's slow, because standards such as LACP already exist to do it, because 10GbE interfaces are available, and because rational network designs like the one I suggest above can use multiple separate subnets. There are multiple ways to "do it better."
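
If you really do want multiple physical ports on one network, LACP is the mechanism for that. Purely as a sketch, with hypothetical interface names (the switch ports at the other end must be configured as an LACP group too, and on FreeNAS you'd build the lagg in the GUI rather than from the shell):

ifconfig em0 up
ifconfig em1 up
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1
ifconfig lagg0 inet 192.168.2.10 netmask 255.255.255.0 up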

I addressed this recently in

http://forums.freenas.org/archive/index.php/t-7620.html

The relevant bit that explains how it all works:

Ugh, there's the whole terminology thing, "bind to the IP address" again. I'm assuming what you mean is that when you bind to an IP address that's assigned to a specific card, traffic goes in/out that card.

To expand for the OP: you can certainly "bind to an IP address" in UNIX, but in FreeBSD that is an abstraction that makes no guarantees about the physical handling of the traffic. This is really useful in many environments. For example, you can get network redundancy for a server by putting your service addresses on a loopback interface and then advertising that into your interior gateway protocol using OSPF or some other IGP; the address that all external traffic uses isn't configured on ANY ethernet interface, yet userland applications "bind to the IP address" just as if it were an alias (or the primary address) on an ethernet interface. The point is, application-level binding is not closely coupled to physical interfaces. FreeBSD supports a rich set of network interfaces, including point-to-point links (PPP, SLIP, parallel port cable), ATM, etc., and the networking subsystem presents it all as an abstraction to userland. Since so much of the IP configuration is driven by what's defined for physical interfaces, this leads to operational and terminology confusion.

Basically, for the issue at hand, there are two key bits:

Input traffic from an Ethernet network to a host is controlled by ARP. The ARP subsystem publishes MAC addresses in response to requests from the ethernet network, and this happens infrequently (far less than once a second); ARP controls packet ingress. The system ARP table maintains a list of learned and published addresses, and when an ARP request is received, the system compares the requested address to its interfaces and responds with the MAC address of the matching interface. This process works pretty much the way the OP would expect, but it can be subverted. For example, if I have a server with two interfaces, em0=192.168.1.1/24 and em1=192.168.1.2/24, and I set a client's ARP table with a manual entry for 192.168.1.1 pointing at the MAC address of the server's em1, then traffic for 192.168.1.1 from client to server enters the server on em1. And everything works fine; the UNIX networking layer doesn't think this odd, and even if you have a userland app bound to 192.168.1.1, it all works.
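
You can watch (and perform) that subversion yourself. A sketch, with a made-up MAC address:

# on the client: see which MAC the server's addresses resolved to
arp -an

# subvert it: pin 192.168.1.1 to the MAC of the server's em1
arp -s 192.168.1.1 00:11:22:33:44:55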

Output traffic to an ethernet network is controlled by the routing table, and the routing table is keyed by destination IP address. When you do a "route get ip.ad.dr.ess", the system does a routing table lookup similar to what happens when a packet is output. The source IP address isn't considered, because IP routing happens based on destination. So as long as the routing code picks AN interface that's on the network in question, packets will be delivered, and that's pretty reasonable.
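
You can see that decision being made, for example (hypothetical destination address):

# ask the routing table which way packets to this destination will go;
# the "interface:" line of the output is the NIC they will leave on,
# regardless of which source address an application bound to
route get 192.168.1.50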

If you want to have multiple interfaces on a network, you should use link aggregation.

If you want to have multiple IP addresses on a network, you should define ifconfig aliases.
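
As a sketch (em0 and the address are hypothetical; note the FreeBSD convention of a /32 netmask for an alias on an already-configured subnet):

ifconfig em0 inet 192.168.1.2 netmask 255.255.255.255 alias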

You can do other things, but then you're fighting the way it was designed, and then it may not work as you expect.
 

dchmax

Cadet
Joined
Apr 1, 2013
Messages
7
I've tried a few different configurations but cannot seem to get Windows 2008 R2 to show multiple paths. I've configured my iSCSI portal with two IPs in two different subnets (192.168.1.1 and 192.168.2.1). I've even tried to create two portals, each with one subnet, but cannot assign one target to both portals. On the Windows 2008 R2 box I've enabled MPIO for iSCSI and rebooted, and I've added both iSCSI portal IPs in the Discovery tab. I can connect on either subnet, but I'm not able to see both at the same time. When I go into Device Details > MPIO, there is one path.

I'm not sure if it's something I'm missing on the FreeNAS or Windows side to get MPIO working correctly.
 

pdanders

Dabbler
Joined
Apr 9, 2013
Messages
17
You need multiple NICs on your Windows system as well. Each NIC needs to be in a different subnet (corresponding to the FreeNAS subnets you configured). That way each NIC can only "see" the one FreeNAS interface on its local subnet.
 

dchmax

Cadet
Joined
Apr 1, 2013
Messages
7
I will double check, but I'm 99% sure that is how it's configured, and still no MPIO.
 

dchmax

Cadet
Joined
Apr 1, 2013
Messages
7
Here is what I have setup on the FreeNAS box.

eth0 172.16.x.x
ISCSI1 192.168.1.1/24
ISCSI2 192.168.2.1/24

- iSCSI > Portals > Group ID 1
- I've added ISCSI1/2
- I have "target01" assigned to portal group ID 1 and initiator group ID 1
- "target01" is associated with the file extent.

Windows Server
SAN1 192.168.1.10
SAN2 192.168.2.10

Now a thought just came to me: there is no routing between 192.168.1.x and 192.168.2.x. Is this going to be an issue? I'd think I would still see two paths.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No. When you're learning what you need to do to make this work, routing is an entirely evil, misleading, off-in-the-weeds distraction: bad juju. From the FreeNAS side, "netstat -an" is your friend for seeing which connections have been established. You might first try making a plain iSCSI connection to the FreeNAS on each network, one at a time, to establish that baseline connectivity works on both networks.
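
Something like this (3260 is the default iSCSI port):

# list established connections, filtered to the iSCSI port;
# with MPIO working you should see one session per path
netstat -an | grep 3260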
 

dchmax

Cadet
Joined
Apr 1, 2013
Messages
7
jgreco, thanks for your help. I think I figured it out. I'm used to two paths automatically showing up in the iSCSI initiator. This time I created the first session and used the advanced settings to bind it to the first subnet, then created a second session manually on the second subnet. Now to do some testing.
 

delecti

Cadet
Joined
Oct 16, 2012
Messages
4
I see a lot of misleading dross in this post. "IP networking, works in Windows"... WTF. Don't post if you don't have real information!

I am using a similar hardware setup. This FreeNAS box is for two vSphere servers to share as a datastore. One NIC is only for a weak home setup, and not even an option unless you are using a 10 Gb card.

- I have 1 onboard port for the administration interface and 4 Intel GbE interfaces.
- Right now they are in a LAGG with load balancing on.

Like you, getting past the 1 Gb saturation mark on our iSCSI link is the goal: replicating the performance of a 4 Gb Fibre Channel link. MPIO and multiple IPs, I am told, is the way to achieve this; however, I still have not configured a box right to see this result. I am configuring another FreeNAS box now on the same hardware, so I will be testing a few things, multiple portal IPs and the like. Results will be posted here.

If you had a switch, you could VLAN those two networks and connect them, rather than routing them.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I see a lot of misleading dross in this post. "IP networking, works in Windows"... WTF. Don't post if you don't have real information!
ROFL. That is HILARIOUS coming from a guy with a single-digit post count.
 

Jeff Fanelli

Cadet
Joined
Sep 19, 2013
Messages
7
With respect, having two IP addresses on the same subnet, especially for the explicit purpose of iSCSI multipath I/O, is not only a valid configuration but a very common practice in the network-attached storage world. You'll find many examples of this in documentation from VMware, NetApp, EMC, etc.

The main benefit here (as opposed to an IP on each of two different storage networks, which is also a fine configuration, as you point out) is that you can ensure all iSCSI packets stay on the same subnet, and thus the same VLAN, even if one of the paths goes down (during a switch upgrade or reboot, for example). Putting the interfaces on different subnets would potentially mean that packets would have to be routed in the case of an I/O path failure. I've been using this model for YEARS with FreeNAS, and can confirm (by way of interface monitoring / graphing with Cacti) that I get a nice, even distribution of I/O across both FreeNAS interfaces on the same subnet, with my initiators (VMware vSphere hosts) configured in a similar manner.

You're quite correct that having two IPs on the same subnet would provide little (no?) benefit if all or most of the client (initiator) traffic were routed to the FreeNAS, but any sane iSCSI design would avoid this.

Frustratingly, FreeNAS 9.0 and earlier happily supported this common implementation; in 9.1, however, a developer saw fit to quietly 'forbid' such configurations in the FreeNAS GUI, preventing two interfaces from having an IP address in the same subnet. An upgrade will leave the functional configuration in place, but you can't change it. This was quite a surprise to me: upgrading to FreeNAS 9.1 resulted in a broken configuration, requiring me to rearchitect my entire network (add a second jumbo-frame VLAN, support jumbo-frame routing on my L4 switch, etc.) and to reconfigure all of my VMware vSphere hosts with the new interface and network information. Quite a time-consuming endeavor, all in the name of accommodating someone's 'opinion' of networking best practices. :(

-jeff


 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
With respect, having two IP addresses on the same subnet, especially for the explicit purpose of iSCSI multipath I/O, is not only a valid configuration but a very common practice in the network-attached storage world.

If that were true, there would have been no pressing need for 802.3ad. The fact that some vendors have made a feature work does not make it a good idea; if you look at what happens at layer 2, it becomes a royal mess, especially on an OS like FreeBSD, whose network stack is abstracted so that it isn't tied to a single type of networking hardware.

Almost all modern UNIX operating systems use abstracted networking stacks. Output traffic is handled through the route table, so the outbound load balancing that you are hoping to see by putting two interfaces on a single network... just doesn't happen. The authors of the modern stacks know 802.3ad exists; it isn't 1989 anymore, and RFC 1122 stupidity is pointless. Making multiple interfaces work through a SECOND mechanism (in particular, the "route it out the same interface" mechanism many people in your situation seem to expect) would require each packet to be looked up again in a different way, dramatically reducing throughput.

Multipath I/O on separate networks, preferably on separate switches and separate cards gives you a heck of a lot of resilience, a feature you just don't get if you try to stick it all on one wire.

The fact that the GUI used to allow you to do it and doesn't now is unfortunate; it never should have allowed it, because it yields a networking configuration that is broken in ways you wouldn't expect.

Apple sums the topic up very nicely.

I am not going to follow up further on this topic. This is one of those things that's a matter of "what can be made to work" vs. "what is correct." I too can make stupid things work: non-ECC memory can work in a server, RAID5 can recover a 10 TB+ filesystem, etc. That doesn't make it correct, and doesn't mean it will work reliably.
 

jkh

Guest
That last post was inappropriate and used language that will NOT be tolerated on these forums! I've sent a private warning to the user in question, and I also want to note the inappropriateness of the behavior publicly. It won't be tolerated. First offense gets a warning shot; second offense, you're gone. No exceptions!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I was going to write a short response to that deleted msg, along the lines of "sorry, multiple interfaces have been 'discussed' many times and beaten to death; I just hurried the discussion along to the conclusion because I don't have the time to beat this horse today."

This isn't a topic for "debate" or "argument"; it is simply the way modern UNIX stacks work. I summarized a lot of information in a short message. Sorry you felt that was rude or whatever. Please do re-read my post in a different light... as me investing five minutes trying to explain a bunch of stuff to you.
 