SOLVED FreeNAS 9.3 FC Fibre Channel Target Mode DAS & SAN

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
mav@,

Thanks for the fast/awesome replies here.

Another Q: It sounds like the FreeNAS FC target doesn't support LUN masking (only-one-portal limitation?), but does it allow presenting different LUNs down different physical ports? In my case I'm going to use a QLE2462, with each port going to a different physical ESXi server; it would be handy for doing "Boot From SAN".

Thanks!

Letni
 

mav@
iXsystems · Joined Sep 29, 2011 · Messages 1,428
It sounds like the FreeNAS FC target doesn't support LUN masking (only-one-portal limitation?), but does it allow presenting different LUNs down different physical ports?
Technically, yes. The SCSI target in FreeNAS supports reporting a different set of LUNs on each physical port. There is no UI for it in FreeNAS (it is one of the TrueNAS-only features), but there is probably nothing to stop you from configuring it by hand via a set of `ctladm lunmap` commands.
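For anyone who wants to try that by hand, a minimal sketch (the port and LUN numbers below are placeholders; take the real ones from the `ctladm port -l` output, and note these runtime mappings won't survive a reboot):

# List CTL frontend ports to find the numbers of the two FC ports
ctladm port -l

# Map CTL LUN 0 to LUN 0 of frontend port 5
# (-L is the CTL backend LUN, -l is the LUN number seen on that port)
ctladm lunmap -p 5 -l 0 -L 0

# Map CTL LUN 1 to LUN 0 of frontend port 6, so each ESXi host sees only its own boot LUN
ctladm lunmap -p 6 -l 0 -L 1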
 

fips
Dabbler · Joined Apr 26, 2014 · Messages 43
I configured my FreeNAS according to this guide.
I have a RAID-Z2 with six 1TB SAS disks, but my bandwidth on one host with a Windows client is only about 80-100MB/s.
Shouldn't it be more over 4Gb FC?
 

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
fips,

Can you give us more details on the configuration:

1. What hardware are you running for FreeNAS (how much memory, CPU, etc.)?
2. What build of FreeNAS are you on?
3. How are you carving up the LUNs in ZFS (zvols vs. file extents)?
4. What FC cards are you using?
5. Are you using single-mode or multi-mode fiber cables, and what link speeds do the lights on the FC cards show?
6. What FC drivers are you using in Windows for that specific card?

Thanks,

Letni
 

fips
Dabbler · Joined Apr 26, 2014 · Messages 43
fips,

Can you give us more details on the configuration:

1. What hardware are you running for FreeNAS (how much memory, CPU, etc.)?
2. What build of FreeNAS are you on?
3. How are you carving up the LUNs in ZFS (zvols vs. file extents)?
4. What FC cards are you using?
5. Are you using single-mode or multi-mode fiber cables, and what link speeds do the lights on the FC cards show?
6. What FC drivers are you using in Windows for that specific card?

Thanks,

Letni

Sure, stupid of me not to have provided that information already.

1: CPU: Xeon(R) E3-1230 v3 @ 3.30GHz, RAM: 16GB ECC
2: FreeNAS-9.3-STABLE-201511280648
3: file extents
4: QLogic QLE2460
5: Link speed is 4Gb (LED color is orange)
6: Windows is a VM on a Proxmox host, which uses the same HBA as FreeNAS

I am setting up a Debian 8.2 VM to check whether it has the same performance.

EDIT:
With Debian 8.2 and virtio drivers I get the same result.
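(Not something asked in the thread, but a common way to split this problem in half: benchmark the pool locally on the FreeNAS box first, taking FC and the VM stack out of the picture. The paths below are placeholders, and the target dataset should have compression off, since /dev/zero compresses to nothing under lz4:)

# Sequential write, then read back, directly on the pool
# (the read may be partly served from ARC, so treat it as an upper bound)
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=8192
dd if=/mnt/tank/testfile of=/dev/null bs=1M

If the local numbers are also around 100MB/s, the pool (or CPU) is the bottleneck; if they are much higher, look at the FC path and the VM layer.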
 

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
Sure, stupid of me not to have provided that information already.

1: CPU: Xeon(R) E3-1230 v3 @ 3.30GHz, RAM: 16GB ECC
2: FreeNAS-9.3-STABLE-201511280648
3: file extents
4: QLogic QLE2460
5: Link speed is 4Gb (LED color is orange)
6: Windows is a VM on a Proxmox host, which uses the same HBA as FreeNAS

I am setting up a Debian 8.2 VM to check whether it has the same performance.

EDIT:
With Debian 8.2 and virtio drivers I get the same result.

I just set up my first FC FreeNAS test rig and got fantastic benchmark results.

Here is my Test Setup:

FreeNAS Server:
- Dell T105
- 8 GB ECC DDR2
- 1.6 GHz dual-core AMD Athlon II u250 (so not that powerful)
- QLE2462 flashed with the latest QLogic BIOS, version 3.29 (available on QLogic's website)
- 1m LC-LC duplex 50/125 multimode 10Gb fiber patch cable
- Latest FreeNAS build
- 3 x 1TB 7200 RPM SATA drives on the onboard ATA controller
- Drives configured as RAID-Z1 with default settings (no GELI encryption; the CPU doesn't support hardware AES)
- Test setup with 4 x 100GB zvols, created with a mixture of sparse on/off and lz4 on/off (see the sketch after this list)
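For reference, a minimal sketch of how a zvol matrix like that can be created from the FreeNAS shell (the pool name "tank" and the zvol names are my placeholders):

# Regular and sparse (-s) 100GB zvols, with and without lz4 compression
zfs create -V 100G -o compression=lz4 tank/fc_lz4
zfs create -V 100G -o compression=off tank/fc_nocomp
zfs create -s -V 100G -o compression=lz4 tank/fc_lz4_sparse
zfs create -s -V 100G -o compression=off tank/fc_nocomp_sparse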

Initiator Server:
- Dell R210
- i3 Sandy Bridge CPU
- 32 GB RAM
- Windows 7 SP1
- QLE2460 flashed with the latest QLogic BIOS, version 3.29 (available on QLogic's website)
- Latest Windows 2008/2012 driver from QLogic's website
- 4 x 100 GB disks formatted NTFS with GPT partitioning
- ATTO Disk Benchmark

ATTO Results:
* See the screenshots below: Run 1 is on the lz4-compressed zvol, Run 2 on the zvol with ZFS compression off.
- At block sizes beyond 32KB, the benchmark reported 350-410 MB/s read/write speeds across all zvol types
- The uncompressed zvol (lz4 off) was slightly faster on small-block IOPS
- Sparse made no difference at all in the benchmarks.

So as you can see, depending on the workload and configuration (how large your read/write IO block size is), the throughput scales up to the link speed of 4Gbit Fibre Channel. Most of the time people aren't going to be in the optimal situation (desktops/VMware typically use 8KB to 128KB block sizes, depending), but latency, throughput, etc. should be better over FC than over Ethernet.
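A quick sanity check on those numbers (my arithmetic, not from the benchmark): 4GFC signals at 4.25 Gbaud using 8b/10b encoding, so the usable payload rate is about 4.25 x 8/10 = 3.4 Gbit/s, or roughly 425 MB/s per direction before protocol overhead. 350-410 MB/s is therefore essentially a saturated 4Gb link.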

I think next I'll throw ESXi 6 on this R210 and run a few Windows VMs off of "SAN" datastores to see what the performance looks like (RDM and through VMFS 5). Stay tuned.
 

Attachments

  • RUN1_LZ4_VOL_FC.JPG (64.9 KB)
  • RUN2_NOCOMP_VOL_FC.JPG (64.1 KB)

fips
Dabbler · Joined Apr 26, 2014 · Messages 43
Somehow I got a strange result....
 

Attachments

  • Screen Shot 2015-12-14 at 19.04.32.png (21.4 KB)

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
fips,

Can you describe your test conditions? You mentioned running ATTO in a VM; what type of storage device is configured for that Windows VM (RDM, VMFS3, VMFS5, etc.)?

letni
 

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
I ran a Win7 VM under ESXi 6, attached over a single FC link on the R210 as I described above, and I get the same numbers as if it were physical (i.e. VMFS/the virtualized host doesn't affect FC performance). Is it possible that you have some other undescribed factor somewhere in FreeNAS (say, encryption turned on, or running out of memory)? Have you made sure your FC HBA firmware is at the latest code?
 

Attachments

  • RUN3_VMFS5_win7VM_LZ4_FC.png (136.3 KB)

fips
Dabbler · Joined Apr 26, 2014 · Messages 43
Target is the FreeNAS storage I described 4 posts above.

Initiator is a Proxmox host.
I use Proxmox, a Debian-based distro, to manage my VMs; it uses KVM for full virtualization.
It has a QLE2640 and 2x SSDs for the system.
The VM where I ran ATTO is Windows 7; I set it up with an emulated SATA disk.
I tried a second VM (a fresh Debian 8.2) with a virtio disk, but it's the same result.

EDIT: I don't understand why I got 600MB/s as the write speed?!?
 

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
Target is the FreeNAS storage I described 4 posts above.

Initiator is a Proxmox host.
I use Proxmox, a Debian-based distro, to manage my VMs; it uses KVM for full virtualization.
It has a QLE2640 and 2x SSDs for the system.
The VM where I ran ATTO is Windows 7; I set it up with an emulated SATA disk.
I tried a second VM (a fresh Debian 8.2) with a virtio disk, but it's the same result.

EDIT: I don't understand why I got 600MB/s as the write speed?!?
I'm guessing that you have some crazy IO bottlenecking/caching in the KVM layer. I don't really know anything about KVM (I'm a VMware guy, unfortunately). Is it possible to use something like a raw disk mapping in KVM (to bypass its IO emulation stack), or just to use a different host altogether, running Windows as the physical OS?
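For what it's worth, a minimal sketch of what that raw-disk approach could look like on Proxmox (the VM ID, bus slot, and device path are my placeholders; the actual FC LUN should show up under /dev/disk/by-id on the Proxmox host):

# Pass the FC LUN's block device straight through to VM 100 as a virtio disk,
# bypassing the emulated SATA layer
qm set 100 -virtio1 /dev/disk/by-id/<your-fc-lun>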
 

severusx
Dabbler · Joined Dec 17, 2015 · Messages 17
First, I'd like to thank everyone for a clear and easy-to-understand guide. I am working on building a FreeNAS appliance that is connected to my ESXi cluster via FC (Brocade switch fabric). The setup is as follows:

Hardware:
SuperMicro 2028R-ACR24L server
2x Intel Xeon E5-2623V3
64 GB of RAM
24x Seagate 600GB 10k SAS drives
3x LSI/Avago 3008-IT SAS HBAs
1x QLogic 2562 FC HBA

FreeNAS 9.3.1 Stable - Current Release

During my POC, I had no issues using the settings in this thread to bring the target online on a desktop PC used for testing. I then transferred the same FC HBA over to my new server and repeated the process using the same FreeNAS installer and settings. However, now I am unable to get the FC HBA to switch into target mode. I see the following errors during boot:

[root@orlfreenas1] ~# dmesg | grep isp
ispfw: registered firmware <isp_1040>
ispfw: registered firmware <isp_1040_it>
ispfw: registered firmware <isp_1080>
ispfw: registered firmware <isp_1080_it>
ispfw: registered firmware <isp_12160>
ispfw: registered firmware <isp_12160_it>
ispfw: registered firmware <isp_2100>
ispfw: registered firmware <isp_2200>
ispfw: registered firmware <isp_2300>
ispfw: registered firmware <isp_2322>
ispfw: registered firmware <isp_2400>
ispfw: registered firmware <isp_2400_multi>
ispfw: registered firmware <isp_2500>
ispfw: registered firmware <isp_2500_multi>
vgapci0: <VGA-compatible display> port 0x2000-0x207f mem 0xc6000000-0xc6ffffff,0xc7000000-0xc701ffff irq 16 at device 0.0 on pci8
isp0: <Qlogic ISP 2532 PCI FC-AL Adapter> port 0xf100-0xf1ff mem 0xfbe84000-0xfbe87fff,0xfbd00000-0xfbdfffff irq 50 at device 0.0 on pci130
isp0: Chan 0 setting role to 0x0
isp1: <Qlogic ISP 2532 PCI FC-AL Adapter> port 0xf000-0xf0ff mem 0xfbe80000-0xfbe83fff,0xfbc00000-0xfbcfffff irq 52 at device 0.1 on pci130
isp1: Chan 0 setting role to 0x0
(ctl0:isp0:0:256:0): Target Mode not enabled yet- lun enable deferred
(ctl1:isp0:0:-1:-1): Target Mode not enabled yet- lun enable deferred
ctlfe_onoffline: isp0 current WWNN 0x20000024ff346c70
ctlfe_onoffline: isp0 current WWPN 0x21000024ff346c70
isp0: Mailbox Command 'INIT FIRMWARE' failed (COMMAND PARAMETER ERROR)
ctlfe_onoffline: SIM isp0 (path id 13) target enable failed with status 0x4
(ctl2:isp1:0:256:0): Target Mode not enabled yet- lun enable deferred
(ctl3:isp1:0:-1:-1): Target Mode not enabled yet- lun enable deferred
ctlfe_onoffline: isp1 current WWNN 0x20000024ff346c71
ctlfe_onoffline: isp1 current WWPN 0x21000024ff346c71
isp1: Mailbox Command 'INIT FIRMWARE' failed (COMMAND PARAMETER ERROR)
ctlfe_onoffline: SIM isp1 (path id 14) target enable failed with status 0x4

I am unsure why FreeNAS doesn't seem to be able to load/enable the firmware for the FC HBA, so I am reaching out to the experts for help.
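In case it helps others debugging the same symptom, a couple of generic CTL-side checks (standard FreeBSD diagnostics, not a known fix; the port number below is a placeholder taken from the `ctladm port -l` output):

# List CTL frontend ports and their current state; the isp(4) ports should appear here with their WWNs
ctladm port -l

# Try to (re)enable target mode on one frontend port and watch dmesg for the same mailbox error
ctladm port -o on -p 5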
 

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
What firmware version is on your 2532?

What FreeNAS build were you using for your testing?
 

severusx
Dabbler · Joined Dec 17, 2015 · Messages 17
I started with 5.6 on the HBA and, in the process of troubleshooting, upgraded it to 8. I still have another HBA on 5.x if needed.

The testing was done on the most current version of 9.3.1 as of 3 weeks ago.
 

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
Actually, after some quick Google searches:
http://lists.freebsd.org/pipermail/freebsd-questions/2014-August/260418.html
https://bugs.freenas.org/issues/5886

It sounds like FreeBSD and 8Gb QLogic cards don't play together very well. The log error messages you posted above could be red herrings, and there could be another issue here: you tested this config a few weeks ago with a slightly older FreeNAS build, and now, on the latest build as of this week, the functionality seems broken. I would download the FreeNAS 9.3.1 ISO and re-install (given that the ISO is a few months old) and see if the same issue exists on the same hardware.
 

severusx
Dabbler · Joined Dec 17, 2015 · Messages 17
Yeah, I had seen those and thought the same thing. I used the same ISO I used in my POC and got the same broken result.
 

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
Did you happen to run a software update since the install, then?
 

Letni
Explorer · Joined Jan 22, 2012 · Messages 63
I guess luckily you can get 246x cards on eBay for cheap. I would think even the most powerful machines/setups would have trouble saturating, say, 2 x 4Gbit connections (assuming you set up multipathing software on the host that supports active-active paths), though I don't know of any multipathing software that will do active-active with generic initiators; usually that requires vendor-specific plug-ins to the multipathing software (MPIO, PowerPath, etc.).
 

severusx
Dabbler · Joined Dec 17, 2015 · Messages 17
Yeah, I actually need the 8Gb throughput, as this is connected to a 13-host cluster and it will saturate the link.

mav@, do you have any suggestions? It seems very odd that it would work with one set of hardware but not another with the same software version.
 