FC target support in FreeNAS 9.1.0

dcplaya

Cadet
Joined
Jul 1, 2014
Messages
8
Quite likely. You should search for multipath SCSI support in that Proxmox thing.

That is my next thing to do. And it is definitely the same drive. I formatted sde and then did fdisk -l again, and sdf was formatted exactly the same.
 

aran kaspar

Explorer
Joined
Mar 24, 2014
Messages
68
Turned out Proxmox is just as easy. I had my HBA configured wrong. After going through all the steps again, Proxmox now sees my test FC store.

Strange thing though, Proxmox sees two 500GB drives. I only created one file extent but have two fibres going between the two. Is it actually seeing the same thing?
It's the same LUN.
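For anyone who wants to double-check that on the initiator side, here is a quick sanity check from a Linux host such as Proxmox (a sketch only; the device names and the scsi_id path can differ by distribution):

  # Print the SCSI identifier of each device node; identical output
  # means both nodes are the same LUN reached over two paths.
  /lib/udev/scsi_id -g -u -d /dev/sde
  /lib/udev/scsi_id -g -u -d /dev/sdf

  # With multipath-tools installed, the two paths should then fold
  # into a single multipath device.
  multipath -ll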
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
FC is great, but what about FCoE on Intel 10G NICs?
I suppose you should ask Intel about that. I don't have an Intel 10G NIC, but at least on Chelsio, FCoE is a separate device that requires a separate driver.
 

shang

Dabbler
Joined
Dec 10, 2014
Messages
10
Hi,
It works great.

One question:
On the SAN switch, it presents as scsi-fcp: target.
Is it possible to present LUNs with their own WWNs?
That way, instead of supporting just one ESXi server, we could share the storage with multiple ESXi servers.

Thanks for your hard work and sharing the info.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
On the SAN switch, it presents as scsi-fcp: target.
Is it possible to present LUNs with their own WWNs?
That way, instead of supporting just one ESXi server, we could share the storage with multiple ESXi servers.
Unfortunately my FC-fabric experience is close to zero, so I am not sure what is actually needed here. All FreeNAS LUNs already have their own 16-byte LUN IDs in NAA format (used for XCOPY purposes), which, as I understand it, are in some cases called WWNs. FC ports also have their own 8-byte IDs flashed by the vendor, known as WWPNs. CTL also now supports 8-byte node IDs, known as WWNNs (though the WebUI does not configure those yet). So, could you please specify what exactly is missing here?
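If it helps, those identifiers can be inspected from the FreeNAS shell; a rough sketch (exact output varies between versions):

  # list CTL LUNs with their device IDs (the NAA identifiers mentioned above)
  ctladm devlist -v

  # list CTL ports together with their WWNN/WWPN
  ctladm port -l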
 

shang

Dabbler
Joined
Dec 10, 2014
Messages
10
Unfortunately my FC-fabric experience is close to zero, so I am not sure what is actually needed here. All FreeNAS LUNs already have their own 16-byte LUN IDs in NAA format (used for XCOPY purposes), which, as I understand it, are in some cases called WWNs. FC ports also have their own 8-byte IDs flashed by the vendor, known as WWPNs. CTL also now supports 8-byte node IDs, known as WWNNs (though the WebUI does not configure those yet). So, could you please specify what exactly is missing here?

Thanks for the reply.
It could be my fault, but in the switch's FLOGI or FCNS database there are no entries for the LUNs' WWNs.

It works great for one ESXi server. After connecting to the storage's HBA, the server sees all the LUNs.
Because LUNs are not visible on the switch, sharing the storage with multiple servers is not feasible.


The Qlogic HBA is on the storage side, which has multiple LUNs.
The Emulex HBA is on the server side; it does see all the LUNs after connecting to the storage.
On the switch, the port is FX (auto-selects F or FL; it is F now).
The switch finds the HBA, but no LUN WWNs.
========================================================================================
FCNS output:
--------------------------------------------------------------------------
FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x310000 N 21:01:00:1b:32:3b:7a:fe (Qlogic) scsi-fcp:target
0x360000 N 10:00:00:00:c9:97:5c:3d (Emulex) ipfc scsi-fcp:init

FLOGI
--------------------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
--------------------------------------------------------------------------------
fc1/3 3067 0x310000 21:01:00:1b:32:3b:7a:fe 20:01:00:1b:32:3b:7a:fe
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
It works great for one ESXi server. After connecting to the storage's HBA, the server sees all the LUNs.
Because LUNs are not visible on the switch, sharing the storage with multiple servers is not feasible.

ESXi's VMFS file system is clustered, so I think there should be no problem connecting multiple ESXi servers to the same storage without LUN filtering on the FC switch.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Could you show an example from some other storage with LUNs reported in those tables the way you want them? I am not sure that the fabric login with its WWNN/WWPN has anything to do with individual LUNs, rather than the lower level of targets and ports. The isp driver used for Qlogic cards on FreeBSD actually supports some virtual ports, which, if configured, are seen by the initiator as completely different targets behind the port, and I guess may be reported separately in the tables you are asking about. But in my tests it was so unreliable to configure that I gave up on it so far.

On the other hand, CTL already supports exporting a different set of LUNs through different Fibre Channel ports, so if you can dedicate separate ports for specific needs, then all you may need is a WebUI update to support that functionality. I hope that happens at some point.
 

shang

Dabbler
Joined
Dec 10, 2014
Messages
10
Thanks for the quick reply.

You are right.
I think for HA to work, it does need a shared datastore, and this solution does meet that need.
However, without LUN filtering, we cannot have a Windows server connecting to the storage, as Windows will try to take over all the LUNs and a single click could cause a disaster.

".....supports some virtual ports, which, if configured, are seen for initiator as completely different targets behind the port, and I guess may be reported separately in tables you are asking about. But on my tests it was so unreliable in configuration, that I gave up on it so far...."
"...reported separately...." is exactly what I mean. Is it possible for each LUNs to have its own WWPN? Maybe I am asking too much. :)

"....separate ports for specific needs...." physical ports could be a solution, but has its limitation. Different WWPN would be more flexible.

Thanks again for your hard work, for being willing to share the info with us, and for spending time answering my questions.
Sincerely

Shang
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
"...reported separately...." is exactly what I mean. Is it possible for each LUNs to have its own WWPN? Maybe I am asking too much. :)

There is a magic loader tunable, hint.isp.X.vports. Set to a value above zero, it creates the respective number of virtual ports for the specified physical port, each with its own WWPN. I have no idea whether all cards support that, but at least my 8Gbps Qlogic seems to. At this point they all report the same set of LUNs, but that can easily be fixed once the WebUI is able to configure that for physical ports.
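For example, a minimal /boot/loader.conf sketch (assuming the first Qlogic port shows up as isp0; the unit number and count are just examples):

  hint.isp.0.vports="2"   # create two virtual ports on isp0, each with its own WWPN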

The problems I've seen with it were about taking ports up and down in random order. But if you just enable them once at boot and never touch them afterwards, it will probably work well enough for you to run some tests.
 

shang

Dabbler
Joined
Dec 10, 2014
Messages
10
Hi, here are my findings:

21 is the HBA; 22 and 23 are the new WWPNs.
To keep it simple, I also created two LUNs.
The ideal result would be one LUN mapped to one WWPN.
=====================================================
0x3101e4 NL 23:01:00:1b:32:3b:7a:fe (Qlogic) scsi-fcp:target
0x3101e8 NL 22:01:00:1b:32:3b:7a:fe (Qlogic) scsi-fcp:target
0x3101ef NL 21:01:00:1b:32:3b:7a:fe (Qlogic) scsi-fcp:target

Both LUNs are accessible from any of the WWPNs, as you mentioned.
Without a WebUI to map LUNs to WWPNs, it is not very useful.
Thanks

Shang
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Hi Shang,

I think what you're looking for is more commonly called "LUN Masking" in FC terminology.

From my reading there doesn't appear to be any support in the FreeBSD CTL target for this. Sorry.

With that said, I haven't tried it recently under FreeNAS. I may give it a shot as I have a spare system and a supported QLE2462 HBA to play with.
 

shang

Dabbler
Joined
Dec 10, 2014
Messages
10
Hi Shang,

I think what you're looking for is more commonly called "LUN Masking" in FC terminology.

From my reading there doesn't appear to be any support in the FreeBSD CTL target for this. Sorry.

With that said, I haven't tried it recently under FreeNAS. I may give it a shot as I have a spare system and a supported QLE2462 HBA to play with.

Hi HoneyBadger,
Thanks for your inputs.

LUN mapping and masking are exactly what I want to accomplish.
If it is not supported, then it is not supported.
Thanks

Shang
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Hi HoneyBadger,
Thanks for your inputs.

LUN mapping and masking are exactly what I want to accomplish.
If it is not supported, then it is not supported.
Thanks

Shang

No problem. Sorry I couldn't give you better news.

You can control access via iSCSI portal/initiator group IDs by IP address, but if FC is a must then you'll have to look at COMSTAR under Solaris or SCST under Linux until these features are implemented in CTL.
 

aran kaspar

Explorer
Joined
Mar 24, 2014
Messages
68
Hi, FC is great, but what about FCoE on Intel 10G NICs?
Tranced,
I guess if you have drives that are actually capable of handling the extra money's worth of speed that 10Gb delivers, go for it. But I hear $$$ signs.
I thought the same could be said here... why use a 10Gb FCoE CNA instead of a 16Gb Fibre Channel HBA or a 20Gb SAS card?

I'm curious about the benefits of FCoE. Will you share?
 

Skynet3020

Dabbler
Joined
May 21, 2015
Messages
17
Special thanks to mav@ and cyberjock, who may or may not know it, but whose posts/replies on the forums have helped me immensely in completing this.
My ESXi/FreeNAS sandbox is complete. Thank you guys!!!!!

Here is my exact config for anyone who is looking to set this up...
----------------------------------------------------------------------------------------------------------------------------------------
FreeNAS Target side

1. Install the right version
  • FreeNAS 9.3 BETA (as of this writing) --- install!
2. Install FC HBAs - configure manual speed in HBA BIOS
  • I'm using a Qlogic QLE2462 HBA card
  • 4Gbps speed set manually in the HBA BIOS (recommended by Qlogic)
3. Check FC link - status after bootup
  • I have 2 fiber FC cables for the 2 ports on each of my HBA cards
  • Check that the firmware/driver loaded for the card, shown by a solid port status light after bootup
  • I have the Qlogic-2462 [solid orange link = 4Gbps]; check your HBA manual for color-code diagnostics
4. Add Tunables in "System" section
  • variable: ispfw_load / value: YES / type: loader (starts the HBA firmware)
  • variable: ctl_load / value: YES / type: loader (starts the ctl service)
  • variable: hint.isp.0.role / value: 0 (zero) / type: loader (target mode, FC port 1)
  • variable: hint.isp.1.role / value: 0 (zero) / type: loader (target mode, FC port 2)
  • variable: ctladm / value: port -o on -t fc / type: loader (binds the ports)

Add Script in "Tasks" section​
  • Type:command________command:ctladm port -o on -t fc____When:Post Init
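For reference, these tunables boil down to something like the following /boot/loader.conf lines, with the ctladm call run once after boot by the post-init task (a sketch only; adjust the isp unit numbers to match your ports):

  ispfw_load="YES"       # load the Qlogic HBA firmware module
  ctl_load="YES"         # load CTL (the CAM Target Layer)
  hint.isp.0.role="0"    # put FC port 1 in target mode
  hint.isp.1.role="0"    # put FC port 2 in target mode

  # ctladm is a command, not a loader tunable, so it runs as the post-init task:
  ctladm port -o on -t fc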


5. Enable iSCSI and then configure LUNs
Enable the iSCSI service and create the following...

create portal (do not select an IP; select either 0.0.0.0 or the dashes, whichever allows you to advance)
create initiator (ALL, ALL)
create target (select your only portal and your only initiator) and give it a name... (it doesn't much matter what)
create extent (a device extent will be a physical disk, a file extent will be a file on a ZFS volume of your choice). Research these!
create associated target (choose any LUN # from the list, and link the target and extent)

If creating a File extent...
Choose "File", then select a pool, dataset, or zvol from the drop-down tree.
You must tag a slash onto the end of the path and type in the name of the file extent to be created,
e.g. "Vol1/data/extents/ESXi"

If creating a Device extent...
Choose "Device" and select your zvol (it must be a zvol, not a datastore).
! ! ! BE SURE TO SELECT "Disable Physical Block Size Reporting" ! ! !
[ It took me days to figure out why I could not move my VMs' folders over to the new ESXi FC datastore... ]
[ They always failed halfway through, and it was due to the block size of the disk. Checking this fixed it. ]
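If you still need to create the zvol itself, it can also be done from the FreeNAS shell; the pool, name, and size here are examples only:

  # create a 500G zvol to use as the device extent (hypothetical pool/name)
  zfs create -V 500G Vol1/esxi-lun0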

REBOOT! REBOOT! REBOOT! REBOOT! REBOOT! REBOOT! REBOOT! REBOOT! 1 time.
now... sit back and stretch your sack - your Direct Attached Storage is set up as a target
---------------------------------------------------------------------------------------------------------------------------------------------

ESXi Hypervisor Initiator side

1. Add to ESXi in vSphere
Go to Configuration > Storage Adapters, select your fiber card, and click Rescan All to check its availability.
If you don't see your card, make sure you have installed the drivers for it on ESXi. (Total PITA if done manually.)
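The same rescan can also be done from the ESXi shell, if you prefer; a sketch, with the adapter name as an example only:

  # rescan all storage adapters
  esxcli storage core adapter rescan --all
  # or rescan just one adapter, e.g.:
  esxcfg-rescan vmhba2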

2. Datastores
I was able to use my LUN when it was created using a device extent.
I had trouble creating a datastore with a file extent... after some futzing I somehow fixed that.
But before I used the file extent as a datastore, I ended up using the file extent this way...

VM Guest attach
I presented the drive by adding a new virtual "Hard Drive" in one of my guest VM's machine settings...
as a "RAW DISK".
It gets presented to the VM guest just like another hard drive, but the speeds are blazing, of course.
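If you would rather do that raw-disk mapping from the ESXi shell instead of the GUI, the usual approach is an RDM pointer file created with vmkfstools; a sketch only, the device ID and paths below are made up:

  # create a physical-mode RDM pointer to the FC LUN inside a VM's folder
  vmkfstools -z /vmfs/devices/disks/naa.6589cfc000000123456789abcdef0123 /vmfs/volumes/datastore1/myvm/fc-lun-rdm.vmdk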

Will it pool capacity and double my speed?
You can add multiple physical disks to a vSphere datastore, and while it doubles in capacity as expected... I found that it does not actually use both disks simultaneously for any throughput advantage.

MULTI-PORT HBAs
If you have hundreds or thousands of file requests per minute...
You can configure load balancing in ESXi... (this is for very access-intensive and fail-over purposes; a CLI sketch follows the list)
  1. Create the datastore (like below), right-click it, select "Properties..." and click "Manage Paths..."
  2. Change the "Path Selection" menu to Round Robin to load-balance with fail-over on both ports.
  3. Click the "Change" button and "OK" out of everything.

OR

If you have multi-port HBAs and want a performance advantage...
Set up MPIO for performance and redundancy.
You could make an iSCSI target for each disk "Device" and present them to Server 2012 or whatever you use on the ESXi side.
Read about the rest!!! Good luck!

Please reply if this helped you! I've been trying to get this working for almost 6 months! Thanks again to everyone on the forum.

Many thanks for this guide, it is working very well :smile:
 

Skynet3020

Dabbler
Joined
May 21, 2015
Messages
17
I've got a little problem after updating and rebooting the FreeNAS system.

The extents and target/extents were missing.
After entering them again, the volume was showing up in ESX, but all the data was lost and I had to reformat it.
It's a bit strange; I'm running a new test now...

I'm using the latest nightlies version. Was that wrong, or must I only use the stable version?
 