Fibre Channel Target - Anyone using?

Joined Apr 26, 2015 · Messages: 320
I've set up a FreeNAS 9.3 system on the best hardware I could think of buying, based on everything I've read. About the only thing that isn't perfect seems to be that I'm using Kingston memory.

I've been testing moving some ESXi blades from Fibre Channel storage over to FreeNAS using Ethernet, and so far the performance is awful.
I've been using FC for many years, and Ethernet simply never matches what FC can do.
Granted, I've not done any kind of fine-tuning, but even if I did, I think FC would still kick Ethernet in the butt.

Which makes me wonder... instead of using iSCSI over Ethernet, is anyone here using FC, and if so, which adapter makes the best target?

I don't see support for FC in FreeNAS, but perhaps I'm missing something, since I'm so new to it.

Thanks for any info you can provide.
 
Joined Apr 26, 2015 · Messages: 320
Actually, I tried using LACP, which seemed to work OK until I installed a plugin. I have no idea if the issue was related (I'm too new to FreeNAS to tell), but I had to get rid of LACP in order to regain access to the server.

Right now I've got four 1GbE NICs in the server, and using just one is OK, but it certainly can't be compared to FC.

I've not done any tuning whatsoever yet; I wouldn't even know where to start :)
 

HoneyBadger (actually does care) · Administrator · Moderator · iXsystems
Joined Feb 6, 2014 · Messages: 5,110
LACP and iSCSI don't mix, so you'll have to ditch that immediately. You'll also need to, at the very least, put the multiple GbE interfaces on separate subnets and set up MPIO; a sketch of what that looks like follows.
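A purely illustrative layout (interface names, the vmhba number, and all addresses here are made up, so substitute your own): each FreeNAS GbE port gets its own subnet, each ESXi host gets a vmkernel port on each of those subnets, and the vmkernel ports get bound to the software iSCSI adapter.

FreeNAS: em0 = 10.10.1.10/24, em1 = 10.10.2.10/24
ESXi:    vmk1 = 10.10.1.21/24, vmk2 = 10.10.2.21/24
esxcli iscsi networkportal add -A vmhba33 -n vmk1   # bind each vmkernel port
esxcli iscsi networkportal add -A vmhba33 -n vmk2   # to the iSCSI adapter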

Look at the FreeNAS documentation, specifically these two areas:

7.4.1 LACP, MPIO, NFS, and ESXi - http://doc.freenas.org/9.3/freenas_network.html#lacp-mpio-nfs-and-esxi
10.5 iSCSI configuration - http://doc.freenas.org/9.3/freenas_sharing.html#block-iscsi

You also need a lot more RAM. The "1GB of RAM per TB of storage" rule is geared more toward home users and basic CIFS/NFS file sharing; once you start hosting VMs, you need a lot more in order to keep up with the random I/O they generate.
 
Joined Apr 26, 2015 · Messages: 320
I'll take a look at those links, thanks for the leads.

My objective is to find a low-cost way of moving from FC drives to SATA. My hope was to stick an HBA into the FreeNAS box, then connect my BladeCenters by adding a link from the FC switch to the FreeNAS box. Then I could slowly transition from one to the other, since I've already got everything set up and working.

I'm using 4Gb FC links between servers and storage. I don't have a deep-seated need to stick with FC as the connection method; I would just as easily install a 10GbE NIC in the FreeNAS box, and that might be a whole lot easier. However, I've got to use something that matches the switch/modules I already have, so I don't get into yet another cost there.

Now, having said all that, the location I'm doing this at is not a critical one; it's an off-site development environment that doesn't need 99.9% uptime, but it should be reliable.

The stats for the FreeNAS box are below.
 
Joined Apr 26, 2015 · Messages: 320
This may be unrelated, but I was looking through the logs and found the following in dmesg.
I'm wondering if this is VMware losing its iSCSI connections from when everything stopped working and I had to switch from LACP back to individual NICs.

WARNING: 10.0.1.201 (iqn.1998-01.com.vmware:localhost-23992702): no ping reply (NOP-Out) after 5 seconds; dropping connection
WARNING: 10.0.1.208 (iqn.1998-01.com.vmware:localhost-66629c35): no ping reply (NOP-Out) after 5 seconds; dropping connection
ifa_del_loopback_route: deletion failed
Freed UMA keg (udp_inpcb) was not empty (170 items). Lost 17 pages of memory.
Freed UMA keg (udpcb) was not empty (2016 items). Lost 12 pages of memory.
Freed UMA keg (tcptw) was not empty (800 items). Lost 16 pages of memory.
Freed UMA keg (tcp_inpcb) was not empty (240 items). Lost 24 pages of memory.
Freed UMA keg (tcpcb) was not empty (100 items). Lost 25 pages of memory.

I'm keeping an eye on this as I check into your suggestions.
 

HoneyBadger (actually does care) · Administrator · Moderator · iXsystems
Joined Feb 6, 2014 · Messages: 5,110
Check the drivers on your ESXi hosts for the initiator dropouts. For the "Freed UMA keg was not empty" messages, check whether they correlate with your jails/plugins restarting.

Re: FC, I totally understand - gotta use what you can connect to. If you're willing to go all the way to the command line, have you considered a Solaris-based distro? FC target support via COMSTAR is far better there, and it's still ZFS, but you're not going to get anywhere near the user-friendliness of FreeNAS.
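For anyone curious, the COMSTAR side is roughly this (an untested sketch; the pool/zvol names and size are made up, and the FC port itself also has to be flipped from the qlc initiator driver to the qlt target driver before it will serve anything):

svcadm enable stmf                             # enable the COMSTAR target framework
zfs create -V 500G tank/fc-lun0                # zvol to export (hypothetical name/size)
stmfadm create-lu /dev/zvol/rdsk/tank/fc-lun0  # wrap the zvol as a SCSI logical unit
stmfadm add-view 600144F0...                   # expose it, using the GUID that create-lu prints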

NexentaStor might be an option, but I think the EULA would require you to use a paid edition in a business setting. NexentaCE would be a way to test the functionality at zero cost, though.
 
Joined Apr 26, 2015 · Messages: 320
I'd like to stick with FreeNAS for a while and see what I can get going. It looks like a nice solution for a dev environment.

I checked the ESXi storage path latency and don't see anything too bad. I've not seen any new dropouts in the logs either, so it might be from when I was in LACP mode and had to switch back.

As for jails, I didn't have any until I installed Plex to see how plugins work, but that's a whole other issue :)

https://forums.freenas.org/index.php?threads/no-plugins-list.30552/#post-197624
 

HoneyBadger (actually does care) · Administrator · Moderator · iXsystems
Joined Feb 6, 2014 · Messages: 5,110
FreeNAS is excellent, so don't think I'm bagging on it for its lack of an FC target.

With the advent of inexpensive 10Gbps Ethernet, the FC vs. iSCSI vs. NFS holy wars for VM storage have for the most part been put to rest. If the only option is gigabit, though, iSCSI with MPIO is basically the only way to get acceptable performance, so see about configuring at the very least a second path; a sketch follows.
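On the ESXi side, once a second path shows up, the multipath policy is set per device. Something like this (the naa. ID below is a placeholder; take the real one from the device list):

esxcli storage nmp device list                                        # find your FreeNAS-backed device
esxcli storage nmp device set -d naa.6589cfc000000abc -P VMW_PSP_RR   # round robin across the paths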
 
Joined Apr 26, 2015 · Messages: 320
I've not looked yet, but I assume FreeNAS supports 10GbE adapters, so I think I'm going to go that route; it seems like the least expensive option.
In the meantime, I'll check out MPIO.

I'll need to figure this out.

>make sure that the IP addresses on the interfaces are configured to be on separate subnets with non-overlapping netmasks or configure
>static routes to do point-to-point communication

My subnets run across a firewall that separates them, so I'm not sure how I'd set this up. However, I do have four-port BladeCenter modules, which makes me wonder if I could make some point-to-point connections, maybe along the lines of the sketch below.
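Thinking out loud, one way to read that advice (addresses here are invented) would be a tiny /30 per direct link, which would keep the iSCSI traffic off the firewall entirely:

FreeNAS em0 10.99.0.1/30  <->  blade vmk1 10.99.0.2/30
FreeNAS em1 10.99.0.5/30  <->  blade vmk2 10.99.0.6/30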
 

Skynet3020 · Dabbler
Joined May 21, 2015 · Messages: 17
Hi all

I'm new here and I want to use FreeNAS as an FC target, like Openfiler.
Is this possible, and does anyone have a step-by-step guide?

Thanks
 
Joined Apr 26, 2015 · Messages: 320
Sorry, I can't help you at the moment, as I'm using iSCSI. I might try installing a 10Gb adapter in the FreeNAS box at some point, but I've not spent much time looking into it.
 

cyberjock · Inactive Account
Joined Mar 25, 2012 · Messages: 19,526
> Hi all
>
> I'm new here and I want to use FreeNAS as an FC target, like Openfiler.
> Is this possible, and does anyone have a step-by-step guide?
>
> Thanks

Not really. FC is somewhat supported and somewhat unsupported at present, so the expectation that you know what you are doing is pretty much a given. It's one of those things you can get working in 9.3, but it requires knowing how you would set it up properly on your own, for your exact hardware configuration.
 
Joined Apr 26, 2015 · Messages: 320
I think someone pointed out that some FC adapters are supported, no? In other words, if someone were to install an FC adapter, would FreeNAS recognize it, and could a LUN (or more) be configured to be handed out? The rest of the networking side is not an issue for me, since I already have a full FC environment.

I haven't done anything with FC on FreeNAS yet :)
 

cyberjock · Inactive Account
Joined Mar 25, 2012 · Messages: 19,526
No clue. I own no FC hardware and have no experience trying to get it to work in FreeNAS.

There are a few "guides" on how to do FC that I'd call incomplete; that's why I said it's somewhat supported and somewhat unsupported. AFAIK it does require command-line work, because it's not fully supported in FreeNAS. What those steps are, how many there are, and what to do when your setup doesn't follow someone else's steps is totally up to you to figure out. :(
 

HoneyBadger (actually does care) · Administrator · Moderator · iXsystems
Joined Feb 6, 2014 · Messages: 5,110
The adapters are limited to the QLogic QLE246x and QLE256x series, I believe, via the isp(4) driver.

FreeNAS recognizes them but brings them up as initiators by default. I believe you can make it work, but you have to configure all of the LUNs as if you were binding them to iSCSI, and then set the isp hint value (escaping me at the moment) to flip the adapter into target mode.
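Digging through FreeBSD's isp(4) man page, the hint in question appears to be hint.isp.N.role; an untested sketch (verify against your own release before relying on it):

hint.isp.0.role="1"   # loader tunable, per isp(4): 0=none, 1=target, 2=initiator, 3=both
ctladm port -l        # then confirm that ctl(4) actually sees the port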

There's no LUN masking or any kind of GUI configuration at all right now; as @cyberjock says it's very experimental and really shouldn't be used in any context except that.

If you want FC with your ZFS, go Solaris + COMSTAR.
 
Joined Apr 26, 2015 · Messages: 320
I wonder if a 10Gb adapter would work; if it's the same series, it might.

UPDATE: None are in the same series, so I'd need to know which drivers ship with FreeNAS or can be added. I guess 4Gb FC would be fine for testing anyway.
 

HoneyBadger (actually does care) · Administrator · Moderator · iXsystems
Joined Feb 6, 2014 · Messages: 5,110
> I wonder if a 10Gb adapter would work; if it's the same series, it might.
>
> UPDATE: None are in the same series, so I'd need to know which drivers ship with FreeNAS or can be added. I guess 4Gb FC would be fine for testing anyway.

If you're talking about something like the QLE8152, those are CNAs (Converged Network Adapters) and most likely won't work, unless they have a BIOS option to act in some manner of "legacy FC-only mode" and be recognized under the isp(4) driver.

Bandwidth-wise, multipathed 4Gbps FC should be more than enough for production: after 8b/10b encoding, each 4Gbps link moves roughly 400 MB/s of payload, and a round-robin policy lets you use the full 8Gbps across two links.
 
Joined Apr 26, 2015 · Messages: 320
Yes, I think 4Gb FC would be just fine. When I get a chance to pop one in, I'll do it and then get back to this thread.
 