Fibre Channel Target - Anyone using?

Status
Not open for further replies.

KTrain

Dabbler
Joined
Dec 29, 2013
Messages
36
LACP and iSCSI don't mix, so you'll have to ditch that immediately. You'll also need to, at the very least, put the multiple GbE interfaces on separate subnets and set up MPIO.

Look at the FreeNAS documentation in a couple areas specifically:

7.4.1 LACP, MPIO, NFS, and ESXi - http://doc.freenas.org/9.3/freenas_network.html#lacp-mpio-nfs-and-esxi
10.5 iSCSI configuration - http://doc.freenas.org/9.3/freenas_sharing.html#block-iscsi

You also need a lot more RAM. The "1GB/TB" rule is geared more towards the home user or basic CIFS/NFS filesharing; once you start hosting VMs you need a lot more in order to service the random I/O they generate.
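To make the "separate subnets" requirement concrete, here is a minimal Python sketch, assuming hypothetical NIC names and addresses: each MPIO path needs its initiator and portal on the same subnet, and no two paths may share one.

```python
# Hypothetical addressing for two MPIO paths; NIC names and IPs are
# illustrative only, not taken from the thread.
import ipaddress

paths = {
    "igb0": ("10.0.10.10/24", "10.0.10.20/24"),  # (initiator, FreeNAS portal)
    "igb1": ("10.0.20.10/24", "10.0.20.20/24"),
}

networks = []
for nic, (initiator, portal) in paths.items():
    init_net = ipaddress.ip_interface(initiator).network
    portal_net = ipaddress.ip_interface(portal).network
    # Each path's two endpoints must share a subnet with each other...
    assert init_net == portal_net, f"{nic}: endpoints not on the same subnet"
    networks.append(init_net)

# ...but the paths themselves must not overlap, or MPIO can't keep them apart.
assert len(set(networks)) == len(networks), "two paths share a subnet"
print("MPIO addressing looks sane:", networks)
```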

Good point about LACP compatibility; it's something I'll consider as I work through my own issues!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Good point about LACP compatibility; it's something I'll consider as I work through my own issues!

Well, now you've got me curious: what issues? I looked at your profile and don't see a recent thread from you, only the one from way back in 2014 about your lessons learned from running on an HP G5 server with a P400.
 

KTrain

Dabbler
Joined
Dec 29, 2013
Messages
36
Well, now you've got me curious: what issues? I looked at your profile and don't see a recent thread from you, only the one from way back in 2014 about your lessons learned from running on an HP G5 server with a P400.

I'm working on building a new system in my new lab. The head will be a DL380 G7 with 144GB of DDR3 ECC (I scored it for free), and the plan is to attach some external SAS trays to it. So far I've simply got the LACP configuration and basic system configuration done. A few days ago I started running some performance testing against a 10K SAS mirror and the results were pretty poor. All that said, I feel like there's a lot of room for fine-tuning; I just have yet to identify where in the deployment I'm experiencing the biggest performance setbacks. I haven't implemented an L2ARC or any other disk-based caching (though I have read about it), as I'm trying to start with the basics before I invest more money into the setup.
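When pool results look poor, one way to narrow it down is to time the same sequential write against the pool locally on the NAS and then against the exported dataset from a client, to see whether the disks or the network path is the bottleneck. A rough Python sketch with placeholder paths; it writes incompressible data so ZFS compression doesn't skew the numbers:

```python
# Crude sequential-write timer. Run it once on the NAS against the pool
# (e.g. /mnt/tank/test.bin) and once from a client against the exported
# share; the paths here are placeholders.
import os
import sys
import time

def seq_write(path, size_mb=1024, block=1 << 20):
    """Write size_mb of incompressible data and return MB/s."""
    buf = os.urandom(block)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure it actually hit storage, not just RAM
    elapsed = time.monotonic() - start
    os.remove(path)
    return size_mb / elapsed

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/tmp/seqwrite-test.bin"
    print(f"{target}: {seq_write(target):.0f} MB/s sequential write")
```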

My lab is two G6 blades in an HP C7000 running VMware 5.5 on local storage... looking to make the move to shared storage so I can simulate DRS and the like.

EDIT: Also, thanks for showing interest in my efforts. At some point I may open a thread on it (after I feel like I've done my due diligence), as I certainly don't want to hijack this one!
 
Last edited:

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
As long as you gave any P4xx-series RAID cards the boot, that system should be a pretty solid performer. Bear in mind that sync writes (the ESXi default over NFS) are still going to suck without an SLOG, though.
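A toy illustration of why sync writes hurt without an SLOG: the same data written with a single flush at the end versus an fsync after every write, which is roughly what a sync-write workload demands of the pool. The path below is a placeholder:

```python
# Compare "async-style" (one flush at the end) with "sync-style"
# (stable-storage acknowledgement after every write). Paths are placeholders.
import os
import time

def write_test(path, sync_each, count=200, block=64 * 1024):
    buf = os.urandom(block)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            if sync_each:
                os.fsync(f.fileno())  # wait for stable storage every time
        if not sync_each:
            os.fsync(f.fileno())      # one flush at the end
    elapsed = time.monotonic() - start
    os.remove(path)
    return (count * block / (1 << 20)) / elapsed  # MB/s

print("async-style:", round(write_test("/mnt/tank/async.bin", False)), "MB/s")
print("sync-style :", round(write_test("/mnt/tank/sync.bin", True)), "MB/s")
```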

Shoot me a PM or start a new thread if you want to go through this; I always love tuning and working on this stuff.
 
Joined
Apr 26, 2015
Messages
320
No HoneyBadger!!! Let's keep this open so those of us interested can pick up and share some thoughts too :)

My reason for checking out FN was/is to get away from FC storage once and for all, but man, it's hard to beat the performance that FC offers.
One thing I've yet to try, however, is a full 10GbE setup, which you would think would blow away a 4Gb FC network, but maybe not.

I've got a couple of the QLogic QLE2462 adapters, but I don't have a 4Gb switch since I was using the BladeCenter's FC switches connected directly into FC storage, which was daisy-chained to three other (JBOD) chassis. Not sure I can justify a switch just to do some quick testing.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
My reason for checking out FN was/is to get away from FC storage once and for all, but man, it's hard to beat the performance that FC offers.
One thing I've yet to try, however, is a full 10GbE setup, which you would think would blow away a 4Gb FC network, but maybe not.

An MPIO 10Gbps setup should crush 4Gbps FC, all else being equal. But the lack of RAM in your box (assuming the 12GB number is still accurate) is definitely hampering performance as well. Based on your board and CPU setup I'm assuming you have 6x2GB in there now; go as high as you can afford, but even adding just another 6x2GB to get to 24GB will help immensely.
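Back-of-the-envelope numbers behind "should crush 4Gbps FC": 4Gb FC runs at 4.25 Gbaud with 8b/10b encoding, so a link tops out around 400-425 MB/s, while 10GbE delivers 10 Gbit/s of payload after 64b/66b coding, roughly 1,250 MB/s; both figures ignore protocol overhead.

```python
# Back-of-the-envelope link throughput, ignoring IP/iSCSI/SCSI overhead.
def fc_4g_mb_s():
    line_rate_gbaud = 4.25
    encoding_efficiency = 8 / 10          # 8b/10b
    return line_rate_gbaud * 1e9 * encoding_efficiency / 8 / 1e6   # ~425 MB/s

def ten_gbe_mb_s():
    payload_rate_gbit = 10.0              # after 64b/66b line coding
    return payload_rate_gbit * 1e9 / 8 / 1e6                       # 1250 MB/s

print(f"4Gb FC : ~{fc_4g_mb_s():.0f} MB/s per link")
print(f"10GbE  : ~{ten_gbe_mb_s():.0f} MB/s per link")
print(f"2x 10GbE MPIO vs one 4Gb FC link: {2 * ten_gbe_mb_s() / fc_4g_mb_s():.1f}x")
```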

I've got a couple of the QLogic QLE2462 adapters, but I don't have a 4Gb switch since I was using the BladeCenter's FC switches connected directly into FC storage, which was daisy-chained to three other (JBOD) chassis. Not sure I can justify a switch just to do some quick testing.

If all you need is 4Gbps, some used HP StorageWorks gear (Brocade rebrands) should be available for super cheap. A quick search of eBay found me a 4/32B with sixteen licensed ports and SFPs for US$100. Search for "AG756A" or "447842-001" to find the exact part.
 
Joined
Apr 26, 2015
Messages
320
Wait now, if I recall, Fibre Channel can also be connected end to end (point-to-point) without any hub/switch. I've got an adapter in both my ESXi server and the FN server.
 
Last edited:

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Wait now, if I recall, Fibre Channel can also be connected end to end (point-to-point) without any hub/switch. I've got an adapter in both my ESXi server and the FN server.

Correct, you can directly connect a pair of FC cards to each other. You can also chain more than two together in arbitrated loop mode (FC-AL), although then you've got a token-ring-style SAN, which isn't exactly your optimal performance solution.
 
Joined
Apr 26, 2015
Messages
320
I think direct would be fine for testing. I wouldn't be using the switch for anything other than connecting the two together, meaning no fabric or anything is needed.
 
Joined
Apr 26, 2015
Messages
320
It's all set up and working. I have a datastore on ESXi using the iSCSI service over Fibre Channel.
I've connected both ports but have only configured one.

Any thoughts on how to test throughput on this?

Since I'm using ESXi, I don't have the cool backup features that ESX has, but I do use a neat script setup called ghetto which lets me back up the VMs while they are running. However, I'd like to find a way of doing that from the ESXi console to the FN server over FC.
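One simple check of the FC path is a sequential read of a large file from inside a guest whose virtual disk sits on the FC-backed datastore. If the file fits in the FreeNAS box's RAM, this mostly measures ARC plus the 4Gb link rather than the spindles, which is still a useful test of the link itself. A rough sketch; the file name is a placeholder:

```python
# Crude sequential-read timer, intended to run inside a guest whose virtual
# disk lives on the FC-backed datastore. Point it at a multi-GB file created
# beforehand; the path is a placeholder.
import sys
import time

def seq_read(path, block=1 << 20):
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return (total / (1 << 20)) / elapsed  # MB/s

if __name__ == "__main__":
    target = sys.argv[1]  # e.g. /path/to/large-test-file.bin
    print(f"{target}: {seq_read(target):.0f} MB/s sequential read")
```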
 