SOLVED FreeNAS 9.3 FC Fibre Channel Target Mode DAS & SAN

Joined
Aug 27, 2015
Messages
6
Hello! First of all, a hearty thanks to aran kaspar for making me a fan of FreeNAS!
I am new to FC SAN and don't have the money to buy a storage box, so I was looking for a software solution for my HomeLab when I stumbled on this thread.
All set now, with one of my systems serving as the server (FC target) and the other as the client (initiator), connected point-to-point. aran kaspar's steps made this go smoothly and easily.
Hardware:
- QLogic QLA2340 cards in both systems (I have 5 of these in total).
- LC-LC SAN cables (I have 5 of these too).
- Dell EMC 200E Fibre Channel 16-port 4Gb switch (not configured yet).
Now I am trying to add multiple LUNs / targets (not sure how this is supposed to work), but as soon as I make any additions to the settings, the LUN vanishes from the initiator.
This is what I want to achieve:
- Have 3 LUNs / targets (as I said, I'm new to FC SAN and need novice-level guidance) for my ESXi 5.5 hosts (in a cluster, currently using iSCSI from a separate DIY storage box).
- Boot from SAN if possible, to eliminate the USB pen drives I have in the ESXi hosts.
I'm not worried about the zoning on the switch, as my friend is a SAN guy and can help, but he has never worked with a software-defined FC SAN solution.
Please guide me on where I should start... Thanks in advance!
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
Multiple LUNs should work just the same as one. I don't know what could disappear in your case, but it should not. Multiple targets are not currently possible over FC, so all extents you create will be exposed as LUNs of the same single FC target. The next release of TrueNAS will provide some more flexibility here, but FreeNAS won't get it and will stay at the minimum. In any case, make sure your FreeNAS is updated to the latest version so you at least get the bug fixes made to the FC drivers.
 
Joined
Aug 27, 2015
Messages
6
Hello mav@, thanks for the quick response.
I think you fixed my issue. I was creating multiple iSCSI targets :p
Will fix it tonight after I get home from the office.
I will create multiple extents and attach them to the same target using different LUN IDs.
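As a sanity check, each mapped extent should then show up in the FreeBSD CTL layer on the FreeNAS box. A dry-run sketch (it only prints the commands, which you would run on the FreeNAS console itself):

```shell
# Dry run: print the FreeBSD ctladm commands that reveal the LUN layout.
# 'ctladm devlist' lists the backing devices (one per extent/LUN);
# 'ctladm portlist' shows the target ports (including FC) they hang off.
for cmd in "ctladm devlist -v" "ctladm portlist -v"; do
    echo "run on the FreeNAS console: $cmd"
done
```

If an extent is mapped correctly, it appears once in `devlist` and is visible behind the single FC target port in `portlist`.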
Please shed some light on how to get my switch into the SAN now. I have not used the switch yet.
And yes, please advise on how to update the FC drivers you mentioned...
 
Joined
Aug 27, 2015
Messages
6
Hey, thanks again mav@.
I don't know if the switch already has any zoning configured on it. I have never used it before. I will reset it to factory defaults and start from scratch :)
Should be back home in the next 4 hours. Will update with status after I check on these things...
 
Joined
Aug 27, 2015
Messages
6
Hey mav@, all set. I got the switch refurbished, so I had to Google a lot to find the password-reset instructions.
Finally got the switch reset to factory defaults, as it had a ton of 'fancy zoning' configured. All set up and running now.
Now, one more quick question.
How do I assign only one LUN to each of the HBAs in the systems? Currently all my systems can see all the LUNs, which I don't want (to avoid data corruption).
Is this at all possible with FreeNAS, or does it require TrueNAS?
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
How do I assign only one LUN to each of the HBAs in the systems? Currently all my systems can see all the LUNs, which I don't want (to avoid data corruption).
Is this at all possible with FreeNAS, or does it require TrueNAS?
The UI for this functionality (LUN masking) is present only in TrueNAS.
 
Joined
Aug 27, 2015
Messages
6
Ahhh, thanks for confirming... I will stick with this for now... Anyway, this is for lab purposes...
Just in case: what is the license price for TrueNAS?
 

aran kaspar

Explorer
Joined
Mar 24, 2014
Messages
68
I'm responding to an old post, but I wanted to add that your explanation totally works. There were a few glitches (I can't recall what now) between what you posted and what I actually had to do, but it all works.

In my case, I'm using this setup with a BladeCenter chassis so any blade can see the storage. I built two FN boxes, each with a dual-port FC HBA, so I have two storage devices for each ESXi host.

The one problem I still need to solve is how to back up the zvol pool used for storing all of the VMs to the second FN server.

What was your solution?
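(Not an answer given in this thread, but the usual approach for this is ZFS snapshot replication with `zfs send`/`zfs receive` over SSH. A dry-run sketch with hypothetical pool, dataset, and host names; it prints the commands rather than executing them:)

```shell
# Dry-run sketch of replicating a zvol to a second FreeNAS box.
# All names here are hypothetical: tank/vmstore is the zvol backing the
# FC extent, freenas2 is the second FN server.
SRC="tank/vmstore"
DEST="root@freenas2"
SNAP="$SRC@backup-20160101"

echo "zfs snapshot $SNAP"
echo "zfs send $SNAP | ssh $DEST zfs receive -F tank/vmstore-copy"
# Subsequent runs can send only the delta since the previous snapshot:
echo "zfs send -i @backup-20151231 $SNAP | ssh $DEST zfs receive tank/vmstore-copy"
```

Since the VMs keep writing to the zvol, such snapshots are only crash-consistent; quiescing the VMs (or VMware-level snapshots) would be needed for application consistency.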
 

aran kaspar

Explorer
Joined
Mar 24, 2014
Messages
68
What type of Fibre cable are you using to connect your ESXi host to your SAN? I believe the connector is LC/LC, but I don't know the rest. I'm looking at doing the exact same thing with my AsRock C2750 and my two SM A1SAi-2750F ESXi hosts.
Not SAN, DAS ;)
I would be interested to see your setup. Do you have photos? :D
 

errmatt

Dabbler
Joined
Jun 3, 2014
Messages
16
I had this working great as a target for an ESXi host until I updated to 9.3-STABLE-201511280648 the other day. Now I get this error message on the console at bootup, after all the drivers and firmware have loaded: "isp0 chan 0 LINK FAILED". I am using QLE2462 (ISP2432-based) cards, one in the ESXi host and one in the FreeNAS box, directly connected. The card seems to hold a link right up until the last ctladm commands get run; then it starts blinking orange on the FreeNAS box and pops up that error. My ESXi host can no longer see the presented LUN.

I guess I knew this was a little experimental when I set it up, but it would be nice to get it working again. Where should I start troubleshooting? Has anyone else had this break after updating to the latest release?
 

errmatt

Dabbler
Joined
Jun 3, 2014
Messages
16
Well, I'll be... I updated my ESXi host to 6.0 Update 1 and this started working again. Before I upgraded my FreeNAS box, I had powered off all the VMs, put the ESXi host in maintenance mode, and shut it down. I'm not sure why it suddenly stopped initializing the FC card after I finished the FreeNAS update and powered the ESXi host back on, but once I updated the host, everything started working again, and the datastore I export from a zvol over the FC cards is accessible and working again. Yippee!
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
FC in FreeNAS has not been seriously touched in the last several months, so I would not expect new problems now. But I am actively working on it behind the scenes, and I plan a major push with a ton of bug fixes sometime in the next couple of weeks.
 

fips

Dabbler
Joined
Apr 26, 2014
Messages
43
FC in FreeNAS has not been seriously touched in the last several months, so I would not expect new problems now. But I am actively working on it behind the scenes, and I plan a major push with a ton of bug fixes sometime in the next couple of weeks.

Hi,
I recently got a lot of 4Gb FC equipment (HBAs, switches, cables) and would love to work it into my infrastructure.
You said you are working on bug fixes; does that mean I should wait for the new build?
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
I don't think there is much reason to wait. The existing code may work fine for you. If not, the nightly branch already includes all the changes I have made up to this moment. They are just less tested, so that risk would be on you.
 

Letni

Explorer
Joined
Jan 22, 2012
Messages
63
Experts,

I am also thinking of implementing this with my FreeNAS. Currently I have FreeNAS virtualized (via physical RDMs) inside of ESXi (VMware) 6.0U1. I want to move away from this setup (though it has been reliable and stable for 2 years now), since I want multiple physical ESXi machines for various functionality (HA, DRS, 2-site, etc.).

FreeNAS would be rebuilt as a physical box, and I'm now heavily considering using QLogic FC cards (I have a QLE2460 and a QLE2462 in hand currently) instead of NAS/iSCSI for shared storage. If this functionality (FC target mode) works with FreeNAS 9.3.x, I'm hoping that I can present a set of zvols out to two identical Dell R210 II servers and format them as shared datastores. I plan to use the QLE2462 (a 2 x 4Gbit-port card) in FreeNAS, and for each of the initiator machines (ESXi), run a single-port QLE2460 card, each connected to one of the ports on the QLE2462.

I understand that FreeNAS doesn't have the concept of LUN masking (from replies to this post, at this point), which is mostly fine given that I generally want to present the same set of zvols/LUNs to multiple ESXi machines, but I'm curious whether this is supported by FreeNAS's implementation of FC target support. Will both machines be able to access the same LUNs concurrently without confusion in the target software running in FreeNAS? Will SCSI reservations (something required by VMware datastores) be handled properly by this FreeNAS FC service? Will I still have the same VAAI functionality that iSCSI would allow? If I move to an HP MicroServer (N40L, maybe), would the dual-core CPU be extremely overwhelmed by the FC service overhead (should I look for a beefier FreeNAS solution)?

Thanks for the info here and look forward to development of this functionality in FreeNAS.

Letni
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
Will both machines be able to access the same LUNs concurrently without confusion in the target software running in FreeNAS?
Yes. The SCSI target in FreeNAS correctly supports multiple simultaneous initiators (up to 2048 initiators per iSCSI portal_group*target, and up to 256 initiators per FC port; that is an artificial limit and can be increased).

Will SCSI reservations (something required by VMware datastores) be handled properly by this FreeNAS FC service?
Yes. FreeNAS supports all flavors of access-sharing primitives: legacy SPC-2 reservations, newer SPC-3 persistent reservations, and the latest VAAI ATS operations.

Will I still have the same VAAI functionality that iSCSI would allow?
Yes. The same SCSI target implementation handles both iSCSI and FC connections, so the SCSI functionality is identical. You can even use them simultaneously in MPIO to access the same LUNs, if you want.

If I move to an HP MicroServer (N40L, maybe), would the dual-core CPU be extremely overwhelmed by the FC service overhead (should I look for a beefier FreeNAS solution)?
FC target mode operation is not cheap on CPU. It requires about 2-3 interrupts to handle a single SCSI command, compared to only one interrupt in FC initiator mode. I am not ready to directly compare its CPU usage to iSCSI on weak hardware, since in the latter case it also depends on the offload capabilities of the NICs used, but I would guess it should be close.
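To put rough numbers on that (my own back-of-envelope arithmetic; the 4Gb/s line rate and 64KiB average command size are assumptions, not figures from this thread):

```shell
# Interrupt-rate estimate for FC target mode at full line rate.
# Assumptions: 4Gb/s link, 64KiB per SCSI command, 3 interrupts/command.
link_Bps=$((4 * 1000000000 / 8))   # 4Gb/s ~= 500,000,000 bytes/s
io_bytes=$((64 * 1024))            # 64KiB per command
iops=$((link_Bps / io_bytes))      # ~7.6k commands/s at line rate
intr=$((iops * 3))                 # ~23k interrupts/s in target mode
echo "~$iops IOPS -> ~$intr interrupts/s"
```

Tens of thousands of interrupts per second is a noticeable load for a low-power dual-core like the N40L, though smaller average commands (and thus higher IOPS) would push it much higher.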
 