SOLVED FreeNAS 9.3 FC Fibre Channel Target Mode DAS & SAN

aran kaspar

Explorer
Joined
Mar 24, 2014
Messages
68
Special thanks to mav@ and jpaetzel ...who should definitely know they've helped me considerably to complete this. Thanks for your posts/replies on the forums.
The ESXi/FreeNAS lab is complete.

This can be used as Direct Attached Storage in point-to-point or arbitrated-loop mode (2 or 3 nodes).
It should also work in a SAN environment with a fabric.

Here is a short guide for anyone looking to set this up.
----------------------------------------------------------------------------------------------------------------------------------------
FreeNAS Target side
freenas.png

1. Install the right version of FreeNAS (9.3)
2. Install your QLogic FC HBAs (QLogic is the only supported brand, to my knowledge.)
QLogic recommends manually setting the port speed in your HBA BIOS
(yes, reboot and press Alt+Q when prompted)
  • I'm using two QLogic QLE2462 HBA cards (one for each server), 4 Gbps max.
3. Check the FC link status after bootup
  • I have 2 ports on each of my cards, with both cables plugged in (not required)
  • Check that the firmware/driver loaded for the card, shown by a solid port status light after a full bootup
  • On my QLE2462 a solid orange link light means 4 Gbps; check your HBA manual for its color codes
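If you want to double-check from the FreeNAS console rather than relying on the port LEDs, something along these lines should work (a sketch; exact device names and output vary by system):

  # Confirm the QLogic HBA shows up on the PCI bus
  pciconf -lv | grep -i -A3 qlogic

  # Look for isp(4) driver attach messages and the negotiated link state
  dmesg | grep -i isp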
4. Add Tunables and Scripts

Go to the "System" section and enter these Tunables...
  • Variable: ispfw_load | Value: YES | Type: Loader (start HBA firmware)
  • Variable: ctl_load | Value: YES | Type: Loader (start CTL service)
  • Variable: hint.isp.0.role | Value: 0 (zero) | Type: Loader (target mode, FC port 1)
  • Variable: hint.isp.1.role | Value: 0 (zero) | Type: Loader (target mode, FC port 2)
  • Variable: ctladm | Value: port -o on -t fc | Type: Loader (bind the ports)
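For reference, the loader-type tunables above amount to roughly the following loader.conf-style settings (just a sketch of the intent; enter them through the GUI so they persist across upgrades). The ctladm entry is really a command rather than a loader variable, which is presumably why it is also added as a post-init task in the next step.

  ispfw_load="YES"       # load the QLogic HBA firmware module
  ctl_load="YES"         # load the CAM Target Layer (CTL)
  hint.isp.0.role="0"    # role for FC port 1 (target use, per the tunable above)
  hint.isp.1.role="0"    # role for FC port 2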
TASKS.png

Go to the "Tasks" section and add in this Script.
  • Type: Command | Command: ctladm port -o on -t fc | When: Post Init
SCRIPT.png
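After a reboot you can confirm the post-init command took effect. From the FreeNAS shell, something like this should list the CTL frontend ports with the FC ports enabled (a sketch; output format differs between versions):

  # List CTL frontend ports; the isp/FC ports should show up with the target role enabled
  ctladm port -l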


5. Enable iSCSI and then configure LUNs
Enable the iSCSI service and create the following...

Create a portal (do not select a specific IP; select 0.0.0.0)
Create an initiator (ALL, ALL)
Create a target (select your only portal and your only initiator) and give it a name... (it doesn't much matter what)
Create an extent (a Device extent will be a physical disk/zvol; a File extent will be a file on a ZFS volume of your choice). Research these!
Create an associated target (choose any LUN # from the list and link the target and extent)

If creating a File extent...
Choose "File" and select a pool, dataset, or zvol from the drop-down tree.
Then tag a slash onto the end of the path and type in the name of the file extent to be created,
e.g. "Vol1/data/extents/CSV1"

If creating a Device extent...
Choose "Device" and select a zvol (it must be a zvol, not a dataset)
BE SURE TO SELECT "Disable Physical Block Size Reporting"
[ It took me days to figure out why I could not move my VMs' folders over to the new ESXi FC datastore... ]
[ They always failed halfway through, and it was due to the block size reported by the disk. Checking this option fixed it. ]
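If you prefer the shell for carving out the zvol that backs a Device extent, something like this should do it. A sketch with made-up pool/dataset names and an example size; creating the zvol under the Storage section of the GUI works just as well:

  # Create a 500 GB zvol on pool Vol1 to back a Device extent
  zfs create -V 500G Vol1/extents/vm-lun0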

REBOOT!
Now... sit back and relax. Your Direct Attached Storage is set up as a target. The hard part is done.
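Once the box is back up, a quick sanity check from the FreeNAS shell (a sketch; what it lists depends on what you created):

  # List the LUNs CTL is exporting; your extents should appear here
  ctladm devlist

  # List the CTL frontend ports again to confirm the FC ports are online
  ctladm port -l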
---------------------------------------------------------------------------------------------------------------------------------------------
ESXi Hypervisor Initiator side
esxi-dedicated-server-icon.png

1. Check that your FC card is installed and shows up
Go to Configuration > Storage Adapters, select your fibre card, and click Rescan All to check its availability.
If you don't see your card, make sure you have installed its ESXi drivers. (A total PIA if done manually; Google it.)
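If you would rather check from the ESXi shell (SSH), something like this should confirm the HBA is present and rescan it. This is a sketch; the vmhba numbering will differ on your host:

  # List storage adapters; the QLogic FC HBA should appear as a vmhbaN entry
  esxcli storage core adapter list

  # Rescan all adapters for new FC devices/LUNs
  esxcli storage core adapter rescan --all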

2. Adding the storage to ESXi in vSphere ( VMFS vs. RAW SCSI )
You can now use your FC extent to create a VMFS Datastore (formatted with the VMware File System).
As you know, this lets you store multiple VM files and such, like a directly connected drive... but now with greater capacity & I/O.
Just "Add Storage" as usual, use the Fibre Channel disk it should find during the scan, and you're done.

I was fine with this, but I personally think the fewer file systems involved between the server and the storage, the better. If I can eliminate one, performance should theoretically be a bit less taxed.
(Your input and experience here is always welcome.)

If you want true block-level access to the FC LUN, you can use a pass-through feature VMware calls "Raw SCSI" (a Raw Device Mapping).
This way, you present the LUN you made to a single VM as a raw SCSI hard disk.
To the best of my knowledge, this is much like connecting a SATA drive directly to the bus on a motherboard.
Unfortunately you can only present it to one VM using this method, but it should allow a more direct route and something closer to true block-level access.

If you use this method, rinse and repeat the steps above and dice up your FreeNAS zvols to make LUNs for each additional VM as needed.

- Adding a RAW disk in ESXi (optional)
Edit Settings for your VM; when you add a new hard disk, you will now see that the disk type "Raw Device Mappings" is no longer grayed out. Use this for your VM.
Remember it will be dedicated to only this one VM guest.

rdm2.jpg
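For reference, the same mapping can also be created from the ESXi shell with vmkfstools. This is just a sketch with placeholder device and path names (the LUN's naa identifier is shown under Storage Adapters):

  # Create a physical-compatibility RDM pointer file for the FC LUN
  # (replace the naa.* identifier and the datastore/VM paths with your own)
  vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk

The resulting .vmdk can then be attached to the VM as an existing hard disk.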


Multi-Port HBAs
Research MPIO for performance advantages and redundancy.
ESXi also has load balancing for VMFS datastores. I'm not entirely sure how advantageous this is, but feel free to experiment; I think you need extremely fast SSDs for it to matter. Just create an ESXi datastore, right-click it, select "Properties..." and click "Manage Paths..."
  1. Change the "Path Selection" menu to Round Robin to load balance with failover across both ports.
  2. Click the "Change" button and "OK" out of everything.
Good luck!

Please reply if this helped you! I've been trying to get this working for almost 6 months! Thanks again to everyone on the forum.


 

VSXi-13

Dabbler
Joined
Sep 17, 2014
Messages
15
What type of Fibre cable are you using to connect your ESXi host to your SAN? I believe the connector is LC/LC, but I don't know the rest. I'm looking at doing the exact same thing with my AsRock C2750 and my two SM A1SAi-2750F ESXi hosts.
 

aran kaspar

Explorer
Joined
Mar 24, 2014
Messages
68
What type of Fibre cable are you using to connect your ESXi host to your SAN? I believe the connector is LC/LC, but I don't know the rest. I'm looking at doing the exact same thing with my AsRock C2750 and my two SM A1SAi-2750F ESXi hosts.
Sweet, never seen a board like that. So-DIMM ECC haha.

May I ask its overall purpose?

I personally would not use an Atom proc for any hypervisor or virtual host.
My 7-year-old Intel Q6700 would blow the lid off that thing, and socket 775 Core 2 Quads are dirt cheap these days... check eBay. I'll post my specs below.
----------------------
The cable doesn't matter... whatever your card uses.

More importantly, make sure you have drivers and compatibility checked across the board.

The cable is like a freeway, the card is what you're driving...

What kind of cards do you have?

Also I would call this a DAS unit. SAN implies a "fabric" with an expensive fiber switch.
You should be able to pull this off without one, if all your cards have two ports.

---------------------------------

ESXi host
Core2quad Q6700 1066MHz bus 8MB cache
Cheapo MSI board
12GB 800MHz
dropped to 566MHz with all 4 installed (2x4G_2x2G) may be the board

FreeNAS
Amd 7750+ black edition
Cheapo Lenovo motherboard
2x2G 800MHz
2x Norco 5-bay hot-swap boxes
Z2 (4x 1TB 7.2K) jails, files
Stripe (2x 256G SSD) VMs
 

VSXi-13

Dabbler
Joined
Sep 17, 2014
Messages
15
Sweet, never seen a board like that. So-DIMM ECC haha.

May I ask it's overall purpose?

I personally would not use an atom proc for any hypervisor or virtual host.
My 5 y/o intel Q6700 would blow the lid off that thing and the 755 socket core 2 quads are dirt cheap these days... Check eBay. I'll post my specs below.
People see Atom and immediately write them off. The C2750 and C2758 series are on par with comparable Nehalem era Xeon processors. I saw another benchmark comparing the C2750 to the Xeon E3 1230V3 and in most of the benchmarks, the C2750 has about 1/2 the performance as the E3 1230V3. If you look at the performance per watt, the C2750 has about double the performance. This was a big part of me choosing it.

If you look at people doing whiteboxes / homelabs, there's a good community of people using the C2750 and 2758 series.

So as to it's purpose. I've got two hosts each with 32 GB of RAM serving an my homelab for VMware. As I have been spinning up more virtual machines, I'm running into issues with host contention for the network. I can bond my network connection to help alleviate some of this, but going with Fibre Channel HBA's may be a good option. Plus, it gives me hands on experience working with Fibre Channel.

----------------------
Doesn't matter the cable... Whatever your cards uses.

More importantly, Make sure you have drivers and compatibility checked across the board.

The cable is like a freeway, the card is what you're driving...

What kind of cards do you have?

Also I would call this a DAS unit. SAN implies a "fabric" with an expensive fiber switch.
You should be able to pull this off without one, if all your cards have two ports.

---------------------------------

ESXi host
Core2quad Q6700 1333MHz bus 8MB cache
Cheapo MSI board
12GB all 800MHz DIMMs
dropped to 566MHz with all 4 installed (2x4G_2x2G) may be the board

FreeNAS
Amd 7750+ black edition
Cheapo Lenovo motherboard
2x2G 800MHz final
2x Norco 5-bay hot-swap boxes
Z2 (4x 1TB 7.2K) jails, files
Stripe (2x 256G SSD) VMs

I don't have the cards yet, but I'm looking at the QLogic QLE2462, as they're dirt cheap. And yes, this would be more of a direct attached storage setup, that is correct. I just wanted to have all my ducks in a row prior to pulling the trigger on buying them. I know it has an LC/LC connector type, but there are cables with different micron ratings and I wasn't quite certain what I needed. The total cable length is likely only 3-5 feet.

For FreeNAS, I'm using an ASRock C2750 board. This comes with a PCIe x8 slot. For my two ESXi hosts (A1SAi-2750F), I will need to purchase their riser card, and then I will plug the QLE2462 into them. From the HBA side, I'll be completely mimicking your setup, except the NAS/SAN/DAS (C2750) will be attaching to both hosts. I just wanted to know which cable I need to use.
 

aran kaspar

Explorer
Joined
Mar 24, 2014
Messages
68
I don't have the cards yet, but I'm looking at the QLOGIC QLE2462, as they're dirt cheap. And yes this would be more of a direct attached storage, that is correct. I just wanted to have all my ducks in a row prior to pulling the trigger on buying them. I know that it has an LC/LC connector type, but there are different micron cables and I wasn't quite certain what I needed. The total cable length is likely only 3-5 feet total.

For FreeNAS, I'm using an Asrock C2750 board. This comes with a PCI-E 8x port. For my two ESXi hosts, A1SAi-2750F, I will need to purchase their riser card, and then I will be plugging the QLE2462 into them. From the HBA side, I'll be completely mimicking your setup, except the NAS/SAN/DAS (c2750) will be attaching to both hosts. I just wanted to know what cable it is I need to use.
I've got FC cables; I'm not sure of the micron width, but I'm sure Google will tell you.

So we know the boards you're using, but we don't know what you're using them for. I checked the specs and it appears your boards have Intel Atoms, is that right?
 

VSXi-13

Dabbler
Joined
Sep 17, 2014
Messages
15
I've got FC cables, not sure the mn width but I'm sure google will tell you.

So we know the boards you're using but we don't know what you're using them for. I checked the spec and it appears as though your boards have intel atoms, is this right?
Correct, both are Silvermont Intel Atoms. I found a Cisco document referencing the QLE2462 HBAs. For 4 Gbps, I can use 150 meters of 50/125 µm fibre, or 70 meters of 62.5/125 µm fibre. Given that my distance is less than 2 meters, I think I'm good now.
 

VSXi-13

Dabbler
Joined
Sep 17, 2014
Messages
15
Everything is now ordered. Can't wait to get them put in and test out. I'll report back to see if following your methodology works for me!
 

aran kaspar

Explorer
Joined
Mar 24, 2014
Messages
68
Everything is now ordered. Can't wait to get them put in and test out. I'll report back to see if following your methodology works for me!
Hey, how did everything go? Judging by the silence I'm - hoping - that it all worked great. Haha.
 

Chadd

Cadet
Joined
Jun 30, 2015
Messages
2
Hi Aran,

I created an account so I could thank you for your post. I'm retiring my old iSCSI NAS in favor of a shiny new FreeNAS box, replacing iSCSI over a dedicated gigabit Ethernet link to my ESXi box with a 4 Gb fibre link.

Is FreeNAS 9.3 BETA still the release I need to use to get this working? Or has this functionality made it into any of the stable releases?

I have Friday off of work, and two QLogic HBAs should be getting here on Thursday, so this will be a nice weekend project. I'm hoping for an improvement in I/O latency for my VMs.
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
FreeNAS 9.3 has already been in release status for half a year. But its development still continues, so you had better keep your system updated to the latest stable build, especially if you are going to use new functionality.
 

Chadd

Cadet
Joined
Jun 30, 2015
Messages
2
Success!

With two QLogic QLE2462 cards in a point-to-point configuration, ESXi 6.0 is able to access the iSCSI volumes on my FreeNAS box. I've noticed much greater throughput and lower latency in storage calls from all of my VMs compared to the 1 Gig Ethernet connection it was previously using.

The problem I'm having now is that the hard drives in my FreeNAS box can't push enough bits to saturate the 2 fiber links :smile:.

Thank you so much for the tutorial.
 

cfgmgr

Cadet
Joined
Jan 9, 2015
Messages
9
Has anyone tested this with a Brocade/Cisco FC switch in the mix? I'd like to avoid connecting all the nodes directly into the FreeNAS box, as I have a 3-node ESXi cluster.
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
I am testing several machines with QLogic QLE2562 HBAs connected via a QLogic SANbox 3810 FC switch. They are working, but the next FreeNAS update should include a bunch of fixes for the FC drivers to make them work better.
 
Joined
Apr 26, 2015
Messages
320
This is an old post I'm responding to, but I wanted to add my input that your explanation totally works. There were a few glitches (can't recall what now) between what you posted and what I actually had to do, but it all works.

In my case, I'm using this setup with a BladeCenter chassis so any blade can see the storage. I built two FN boxes, each with a dual-port FC HBA, so I have two storage devices for each ESXi host.

The one problem I need to find an answer to is how to back up the zvol pool being used for storing all of the VMs to the second FN server.
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
The one problem I need to find an answer to is how to back up the zvol pool being used for storing all of the VM's to the second FN server.
As a first layer you should use periodic snapshots (the "Storage->Periodic Snapshot Tasks" menu), which are useful for recovery by themselves, probably combined with VMware-integrated snapshots for additional consistency (the "Storage->VMware Snapshots" menu). Those snapshots can then be incrementally replicated to another FreeNAS host (the "Storage->Replication Tasks" menu).
 
Joined
Apr 26, 2015
Messages
320
That's where reading so much is just confusing me at this point.

It isn't clear to me how it all comes together. It seems like the first thing you do is create backups on the same FN server using whatever method: snapshots, replication, or full backups.

I read last night that it appears I could replicate the zvol which contains all of the VMs onto the second FN server. However, from the stuff I'm reading, it also kind of seems I need to back up the zvol on the local machine first and replicate THAT over to the second FN?

I'm sure it'll all make sense once I get something to replicate. So far, I've had no luck finding the right details to make it happen. The docs are great, but they often aren't very intuitive for a first-time user.
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
It is all quite logical:
1) A ZFS snapshot is neither a copy nor a backup. It is just the state of a specified dataset/zvol at a specified time, frozen for as long as needed. Snapshots occupy almost no space (until data is overwritten) and they are very cheap.
2) Those ZFS snapshots can be synchronized with VMware snapshots (a VMware snapshot is created, the ZFS snapshot is taken, then the VMware snapshot is deleted but remains inside the ZFS snapshot). So in case of a restore you can get the clients consistent. And again it takes very little extra space, only some CPU time.
3) ZFS allows incremental replication from one snapshot to another. It is impossible to replicate a dataset without snapshots, since it can change at any moment. Snapshots, on the other hand, are not going anywhere and can be easily and reliably converted into a data stream on the sender and reconstructed on the receiver, and so replicated.
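On the command line, points 1 and 3 look roughly like this; just a sketch with made-up pool, dataset, and host names, since on FreeNAS the periodic snapshot and replication tasks in the GUI drive this for you:

  # Snapshots taken on the source zvol over time (normally by the periodic snapshot task)
  zfs snapshot tank/vmstore@auto-1
  zfs snapshot tank/vmstore@auto-2

  # Initial replication: send the first snapshot in full to the second FreeNAS box
  zfs send tank/vmstore@auto-1 | ssh nas02 zfs receive backup/vmstore

  # Afterwards, send only the changes between snapshots (incremental replication)
  zfs send -i auto-1 tank/vmstore@auto-2 | ssh nas02 zfs receive backup/vmstore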
 
Joined
Apr 26, 2015
Messages
320
>3) ZFS allows incremental replication from one snapshot to another. It is impossible to replicate dataset without snapshots

This makes sense. I thought that maybe while replicating, it took a snapshot or something to get a fixed state.

>1) ZFS snapshot is nether copy nor backup. It is just a state of specified dataset/zvol at specified time, frozen for as
>long as needed. They occupy almost no space (until data overwritten) and they are very cheap.

Snapshots are usually taken once we have a full backup, right? I mean, there needs to be a full backup at least once in order to take ongoing snapshots.

>2) Those ZFS snapshots can be synchronized with VMware snapshots (VMware snapshot created, ZFS snapshot created,
>VMware snapshot deleted, but remains existing inside ZFS snapshot). So in case of restore you can get clients consistent.
>And again it takes very little extra space in result, only some CPU time.

And this is the full backup part, I *think*. Meaning, I think the idea is to use this tool to get that first full backup, and then the ZFS snapshots get combined with it if/when a backup is needed.

One thing I've not figured out (nor found enough info about) is why snapshots cannot be deleted.
For example, on nas01, I've created a periodic snapshot task for my esx zvol. I can see those in the Snapshots tab.

I don't have any VMware snapshots, however. It is not clear how I would actually recover if my zvol got trashed.
And I'm not clear on what I actually need to replicate onto nas02.

Coming along, but some things are simply a mystery until they start coming together, as mentioned above :)
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
>1) ZFS snapshot is nether copy nor backup. It is just a state of specified dataset/zvol at specified time, frozen for as
>long as needed. They occupy almost no space (until data overwritten) and they are very cheap.

Snapshots are usually once we have a full backup right?
No. Snapshots are created on the main (source) system and then replicated to the backup. Snapshots come first, replication/backup after.

I mean, there needs to be a full backup at least once in order to take ongoing snapshots.
No. Do you need a copy of a person to take a photo of him? Do you need a copy of a document to send it by fax?

>2) Those ZFS snapshots can be synchronized with VMware snapshots (VMware snapshot created, ZFS snapshot created,
>VMware snapshot deleted, but remains existing inside ZFS snapshot). So in case of restore you can get clients consistent.
>And again it takes very little extra space in result, only some CPU time.

And this is the full backup part I *think?*.

Again, no. VMware also creates snapshots without copying the data, just remembering where the original (snapshotted) data is and where the newly modified data is. When the VMware snapshot is deleted a few seconds later, the new data is merged back with the original.
 