Fibre Channel: Core vs Scale

kschreyack

Dabbler
Joined
Dec 26, 2015
Messages
12
Hello Everyone,

I wanted to bring up this topic, as I just started running Scale and have two Core NAS servers in production. My requirement is to run Fibre Channel as a backup path to 10Gb Ethernet. For my two Core systems, it's a walk in the park setting up the fibre. The Scale system is based on Debian, so what I'm familiar and at home with doesn't apply.

So, where to begin? I can see the hardware, but where do I configure it? My tunables don't seem to apply in Scale; I get an error when setting them, reporting that they don't exist!
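For reference, on Core the whole thing hangs off a couple of loader tunables for the isp(4) driver plus a post-init command. The values below are from memory, so treat them as a sketch and double-check the isp(4)/ispfw(4) man pages:

Code:
# System -> Tunables, type "loader"
ispfw_load="YES"       # load the QLogic HBA firmware module
hint.isp.0.role="1"    # 0 = none, 1 = target, 2 = initiator, 3 = both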

Worst case for me is that I reinstall with Core and go with what is proven to be rock solid and working on the FreeBSD-based TrueNAS servers. I really do like Scale, however, and would love to switch everything over to it ;)

Thanks for reading this and I look forward to any advice and discussion we have here. Have a great day!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Are you implying that your Fibre Channel is using IPoFC?
And then assigning an IP to the network device exposed by IPoFC?

Fibre Channel storage has not been available to SOHO users (or hasn't been in a long time).


Back to the original request, no, I don't know the commands.

It's been a long time since I did this (before 2013). In my case, I vaguely recall building a new kernel that supported an LSI MPT FC card with IP. That likely is not available in SCALE. But someone correct me if I am wrong.
 

LarsR

Guru
Joined
Oct 23, 2020
Messages
719
As far as I know there's no Fibre Channel support in Scale, only Core.
 

kschreyack

Dabbler
Joined
Dec 26, 2015
Messages
12
Arwen said:
> Are you implying that your Fibre Channel is using IPoFC? [...]
Hi Arwen,

I am using fibre cable to a Brocade Silkworm 200, no Ethernet involved. Just for the record, on the current release of Scale the support for the QLogic 25xx was working out of the gate; the kernel modules were loaded. I'm just not sure what comes next in the configuration when trying to implement this on Scale/Debian.
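For what it's worth, this is roughly how I checked that the driver was there (plain Debian commands, nothing Scale-specific; qla2xxx is the upstream Linux driver for these QLogic cards):

Code:
lsmod | grep qla2xxx           # is the QLogic FC driver loaded?
modinfo qla2xxx | head -n 5    # does the module exist for this kernel?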
 

jhartbarger

Dabbler
Joined
Apr 3, 2023
Messages
13
When you say "backup to my 10GiB ethernet" please elaborate further, so if we know what your trying to accomplish exactly it would help.

Also, I know all about iFCP, FCoE, and IPFC, but would anyone care to explain what exactly IPoFC is?
 

kschreyack

Dabbler
Joined
Dec 26, 2015
Messages
12
When you say "backup to my 10GiB ethernet" please elaborate further, so if we know what your trying to accomplish exactly it would help.

Also I know all about iFCP, FCoE, and IPFC but anyone care to explain what exactly is IPoFC ?
Multipath, serving as a backup path configured in VMware. What I'm trying to accomplish is what everyone says is unsupported: iSCSI over Fibre on TrueNAS Core/Scale.

I'm now setting up to do this on Core, which is a piece of cake to configure ;) Unfortunately I give up; I don't have the time for it on Scale and no test units to play around with. Does ANYONE have this working on Scale? There aren't many forum posts on it, just that it's 'unsupported'.
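For context, the failover side is plain VMware multipathing. From the ESXi shell the paths are visible with standard esxcli commands (device names will of course differ per setup):

Code:
esxcli storage core path list     # one entry per path (FC and iSCSI)
esxcli storage nmp device list    # path selection policy per device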
 

jhartbarger

Dabbler
Joined
Apr 3, 2023
Messages
13
kschreyack said:
> Multipath, serving as a backup path configured in VMware. [...]
OK, now I understand what you're trying to accomplish: you want to use it as a block storage path in VMware. I want to make sure you understand that FC is not a compatible transport for iSCSI packets, just the same as Ethernet isn't a compatible transport for FC traffic (FCoE was invented for that).

Unless VMware has implemented support for scst, as HB pointed out above, I think you're out of luck at this time.
 

kschreyack

Dabbler
Joined
Dec 26, 2015
Messages
12
Yup, no doubt! I'm so glad for Core, though. Just a few tunables to set and good to go ;) This has worked for me since FreeNAS 9.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
When you say "backup to my 10GiB ethernet" please elaborate further, so if we know what your trying to accomplish exactly it would help.

Also I know all about iFCP, FCoE, and IPFC but anyone care to explain what exactly is IPoFC ?
I could not remember the abbreviation IPFC, so I used IPoFC (aka IP over FC).

A long time ago I used this for in-band management access to my Brocade SAN/FC switch from a Linux server. It worked fine. That was originally intended to be expanded, but I never got a round tuit.

With some FC SAN switches supporting 16Gbit/s or higher (64Gbit/s was released in 2020), there is some usefulness in the concept of IPFC. (Other higher-speed FC standards require 4 lanes...)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
jhartbarger said:
> Unless VMware has implemented support for scst [...]
scst is what we use server-side on SCALE; VMware just sees an iSCSI target and doesn't much care about what's serving it up (although it will negotiate VAAI capabilities).

I don't believe the necessary target-mode modules are built into the SCALE kernel for the FC HBAs themselves, though.
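A quick way to check from the shell would be something like the below. The module names are assumptions based on upstream SCST, where the QLogic target driver is called qla2x00tgt; SCALE may or may not build it:

Code:
# Look for SCST and QLogic target-mode modules in the running kernel's tree
find /lib/modules/$(uname -r) -name '*scst*' -o -name 'qla2x00t*'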
 

kschreyack

Dabbler
Joined
Dec 26, 2015
Messages
12
On BSD/TrueNAS Core, to turn up the ports:

Code:
ctladm port -o on -t fc
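
To confirm the ports actually flipped on, ctladm can list them too (just a verification step, from the same tool):

Code:
ctladm portlist    # lists CTL frontend ports and their online status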


How is this done via scst on Debian? @HoneyBadger, the modules were loaded in Scale when I checked for a QLogic 25xx card. Unfortunately I won't be able to test any longer, as I've wiped it and done a fresh install of Core, which resolved my problem since I already successfully run two of these ;) Getting Scale to perform similarly is the issue here.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Interesting. What does the relevant section of lspci -nnk look like? I may see if I can find a QLogic card somewhere around my hardware stash.
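If it helps, you can pull just the FC section out with a filter along these lines (the grep context size is arbitrary):

Code:
lspci -nnk | grep -i -A 3 'fibre channel'   # controller line plus the driver/module lines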

But much like FC on CORE, even if FC on SCALE is made to work, it's going to be a completely unsupported setup: it will likely require manual changes through the shell to adjust anything/create LUNs/etc., and those could get overwritten on reboot or by any UI-based change.
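For the curious, the manual route would go through SCST's sysfs interface, roughly like the below. This is a sketch pieced together from upstream SCST documentation, not something we ship or support; the module name, WWN, and zvol path are all placeholders:

Code:
modprobe qla2x00tgt    # SCST's QLogic target-mode module, if it's even built
# Back a LUN with an existing zvol via the vdisk_blockio handler
echo "add_device zvol0 filename=/dev/zvol/tank/vm0" \
  > /sys/kernel/scst_tgt/handlers/vdisk_blockio/mgmt
# Map it as LUN 0 on the HBA's target port (WWN is a placeholder)
echo "add zvol0 0" \
  > /sys/kernel/scst_tgt/targets/qla2x00t/50:01:43:80:12:34:56:78/luns/mgmt
# Enable the target port
echo 1 > /sys/kernel/scst_tgt/targets/qla2x00t/50:01:43:80:12:34:56:78/enabled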
 

kschreyack

Dabbler
Joined
Dec 26, 2015
Messages
12
To my knowledge, over the years I've never had any problems creating zvols and sharing them out via iSCSI. The block shares were always available dual-path (10Gb Ethernet and 4Gb Fibre). After upgrades there was no change in this working that I was aware of. These shares were used solely by VMware ESX servers, and reliably: when I had to take down the 10Gb Dell switch I never lost connectivity, as the paths configured in ESX flipped over automatically and instantly. It's been a good experience for me with FreeNAS 9 and all the iterations of TrueNAS Core ;)

As to running a hardware check via lspci, I really wish I could share that with you. Scale was blown away so I could get to work and start using this new build based on Core 13.0-U5.3.

Thanks so much for the discussion. I really do enjoy it and all things TrueNAS/iXsystems ;)
 