having one large zvolume avail (somehow, if it's safe from corruption) to all physical hosts

It requires a clustered file system to be running on top of that LUN. In the case of VMware that is VMFS. What Proxmox can do about it is a question for them.
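For anyone trying to translate the VMFS answer to the Proxmox side, a minimal sketch of one common approach is shown below: put LVM directly on the shared FC LUN (no regular filesystem) and register the volume group in Proxmox as shared, so Proxmox's own cluster locking keeps the hosts from stepping on each other. The device path, volume group name, and storage ID are placeholders, not values from this thread, and the exact pvesm options should be checked against the Proxmox documentation.

# On one Proxmox node: initialize the shared FC LUN for LVM
# (replace /dev/mapper/mpatha with the multipath device of your LUN)
pvcreate /dev/mapper/mpatha
vgcreate vg_fc_shared /dev/mapper/mpatha

# Register it cluster-wide as shared LVM storage
pvesm add lvm fc-shared --vgname vg_fc_shared --shared 1

With shared LVM, each VM disk is a separate logical volume, so no clustered file system is needed; a clustered file system only becomes necessary if multiple hosts must mount the same file system on the LUN at the same time.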
Hello,
I am trying to understand if and how it's possible to use FreeNAS with this FC configuration to provide storage to multiple Proxmox PVE hosts (or any host for the sake of discussion) without worrying about data corruption.
My configuration is as follows:
FreeNAS Setup:
1. Dell PowerEdge 2950 Server, Qlogic QLE2462 (qty 2)
2. Dell MD1000
3. Two zvolumes, mapped to two separate iSCSI extents.
Host setup:
1. HP Server with Qlogic QLE2462.
I have connected server (A) point-to-point, followed the instructions at the top of this post, and I have working FC storage on my host from FreeNAS. Great! Now I want to set up server (B) with the same configuration and provide it storage via FreeNAS / FC. This is where I am confused.
Server A and Server B both see the same iSCSI target when I go into the QLogic BIOS settings. I also notice that the QLogic HBA sees each extent as a LUN. I am unsure how to provide LUN1 to Server (A) and LUN2 to Server (B) when everything is visible to both.
Doesn't that create a risk of file corruption, even if I only mount one LUN (i.e. LUN1) on one physical host (i.e. Server A)? My end goal is to be able to set up separate zvolumes for each server, all off the same FreeNAS box.
Side question to that last statement: assuming there is a way to accomplish all this, would I be better off carving out separate zvolumes for each physical host, or having one large zvolume available (somehow, if it's safe from corruption) to all physical hosts?
Thank you very much in advance!
1. I read you can only create 1 target as of 9.3. Has that changed in 9.3.1?

Actually there is now FreeNAS 9.10, but no, nothing has changed on this front: there can be multiple LUNs, but only one target. More functionality is available in TrueNAS.
2. Can you use the iSCSI service on the same system running FC? (I'm already assuming you can't point the two services at the same extent)

Yes, you can. In fact, all iSCSI extents created are automatically shared via FC.
3. I also assume multiple initiators can connect (R/W) to the same LUN (example: multiple VMware ESXi hosts connected to a LUN for a datastore)

Yes, you may connect as many initiators as you like, but they should be running some clustered file system (such as VMware's VMFS).
4. Did you need to update the QLogic HBAs to a certain firmware version, like P20 for SAS HBAs?

For all cards except the 16Gbps ones, FreeNAS automatically uploads the firmware bundled with it, so the version flashed onto the card is not important.
I understand the recommended HBA to use is the QLogic 2462; is this still the case?

The only benefit of the 24xx is its price; otherwise it is mostly the lowest entry point (while theoretically 22xx and 23xx cards should also work). I would recommend the more modern 8Gbps 25xx cards now. The newest 16Gbps 26xx cards should also work, but besides their price, they are somewhat new and experimental.
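As a quick way to confirm the "every extent is automatically shared via FC" behaviour described above, the CTL tools used later in this thread can show what the target side is actually exporting. A small sketch from the FreeNAS shell (nothing here is specific to this thread's hardware):

# List all CTL frontend ports; the camtgt/isp* rows are the FC ports,
# and -i additionally shows which initiator WWPNs are logged in
ctladm portlist -i

# List the backing LUNs, one per device or file extent
ctladm devlist -v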
Hello mav@, I'm looking for recommended FC hardware too, but I'm not sure about the 26xx cards. There are the 2670 and 2690 models. Not sure what the difference is.

16Gbps cards (at least the 2670) should work in FC mode now in FreeNAS 9.10. FCoE mode is still not supported, even though it should not be very complicated.
And for Christ's sake, there are already 32Gbps FC cards, the 2700 series!!

Do you wish to donate a couple of those cards (~$1500 each) and sponsor a few weeks/months of development? If not, then all questions go to QLogic.
Finally, only QLogic is recommended today on FreeNAS, right?

At least I don't know of other drivers. The market is very small, so alternatives are limited by definition.
I have a couple of 2670s in our lab and they are working in FC mode. About the 2690 I cannot say anything; it may require some more driver updates.

So you'll be looking at the 2670 models and skipping the 2690s.
[root@freenas] ~# ctladm portlist -i
Port Online Frontend Name     pp  vp
0    YES    tpc      tpc      0   0
1    NO     camsim   camsim   0   0   naa.5000000177157b02
     Target: naa.5000000177157b00
2    YES    ioctl    ioctl    0   0
3    YES    camtgt   isp0     0   0   naa.21000024ff06dfd9
     Target: naa.20000024ff06dfd9
     Initiator 0: naa.210000e08b1c434c
     Initiator 1: naa.500143800630fec2
4    YES    camtgt   isp1     0   0   naa.21000024ff06dfae
     Target: naa.20000024ff06dfae
     Initiator 0: naa.210000e08b1c6349
     Initiator 1: naa.500110a00017000e
5    YES    iscsi    iscsi    257 1   iqn.2005-10.org.freenas.ctl:fc-test-nl,t,0x0101
     Target: iqn.2005-10.org.freenas.ctl:fc-test-nl
6    YES    iscsi    iscsi    257 2   iqn.2005-10.org.freenas.ctl:tsm-ssd-pool,t,0x0101
     Target: iqn.2005-10.org.freenas.ctl:tsm-ssd-pool
The question:
How do I assign a particular LUN to a particular host?
What I need is LUN-to-host mapping.
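The FreeNAS 9.x GUI has no FC LUN-masking feature, but the underlying FreeBSD CTL layer does have per-port LUN mapping in roughly this era. A very rough sketch of what driving it from the shell could look like is below; the port and LUN numbers are placeholders based on the portlist output above, the lunmap subcommand and its flags are an assumption to verify against ctladm(8) on your build, and anything set this way lives outside the FreeNAS configuration database, so it would likely need to be reapplied (e.g. from a post-init script) after a reboot.

# Assumption: this ctladm includes the per-port "lunmap" subcommand;
# check "man ctladm" before relying on the exact syntax.
ctladm lunmap -p 3 -l 0 -L 0   # expose CTL LUN 0 as LUN 0 on FC port 3 (isp0) only
ctladm lunmap -p 4 -l 0 -L 1   # expose CTL LUN 1 as LUN 0 on FC port 4 (isp1) only

Once a port has an explicit mapping, unmapped LUNs should no longer be visible on it, which is the LUN-to-host masking being asked about here (one host per FC port in a point-to-point setup).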
Sorry to resurrect this thread, but I'm currently working on deploying this setup at home on the latest stable (FreeNAS-9.10.1 (d989edd)).
My current setup is very basic, a FreeNAS box with a QLE2562 and an ESXi 5.5 host with a QLE2562 as well. I followed the guide on page 2 of this thread.
I am running into an issue where LUN IDs are renumbered after rebooting FreeNAS, and ESXi doesn't like this at all. I've also noticed I don't even need to create extent/target associations, and when I do, they are ignored: as soon as I add a device or file extent, the storage is immediately visible on the ESX side, without any association. I do not know if this is intended behavior or not.
I really don't need anything fancy here; I only intend on ever running two hosts and one FreeNAS system. I'm fine with all LUNs being presented to both hosts, but I do need to control the LUN IDs.
Hopefully an expert can shed a little more light on this subject for me.
-DC
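One way to pin down what is happening to the LUN numbers is to compare both sides before and after a FreeNAS reboot. A small sketch using standard tools (output formats vary by version):

# On FreeNAS: list the CTL LUNs and the frontend ports
ctladm devlist -v
ctladm portlist -i

# On the ESXi host: list devices and the paths (vmhbaX:C0:T0:Lx) they arrived on;
# ESXi identifies datastore devices by the naa. ID, but the per-path LUN number is shown here
esxcli storage core device list
esxcli storage core path list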
Sep 27 17:23:17 freenas isp0: isp_target_start_ctio: [0x1243e4] data overflow by 524288 bytes
Sep 27 17:23:17 freenas isp0: isp_target_start_ctio: [0x124474] data overflow by 524288 bytes
Sep 27 17:23:17 freenas isp0: isp_target_start_ctio: [0x1244a4] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x124e34] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x124e64] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x1257c4] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x1257f4] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x125854] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x125884] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x126124] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x126184] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x1261b4] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11af34] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11b114] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11b174] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11b1a4] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11b1d4] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x121114] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x121144] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x121b04] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x121b64] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x1228b4] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x1228e4] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11b684] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11b6e4] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11b744] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11c074] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11c0a4] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11d094] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11e474] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11e4d4] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11e504] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12a984] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12a9e4] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12aa14] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12bc74] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12bca4] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12bd04] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12c6c4] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12c724] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12e0d4] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12e104] data overflow by 524288 bytes
Sep 27 17:25:23 freenas isp0: isp_target_start_ctio: [0x1224c4] data overflow by 524288 bytes
Sep 27 17:25:23 freenas isp0: isp_target_start_ctio: [0x1224f4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x127414] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x127474] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x1274a4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x128554] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x1285b4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x1285e4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x1297b4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x12a144] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x12a1a4] data overflow by 524288 bytes
2016-09-27T21:23:57.117Z cpu4:33578)ScsiDeviceIO: 2338: Cmd(0x4136833a8080) 0x8a, CmdSN 0xbc from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:23:57.121Z cpu4:33578)ScsiDeviceIO: 2338: Cmd(0x413683972ec0) 0x8a, CmdSN 0xd1 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:23:57.260Z cpu4:33578)ScsiDeviceIO: 2338: Cmd(0x413683860300) 0x8a, CmdSN 0xd6 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:23:57.260Z cpu4:33578)ScsiDeviceIO: 2338: Cmd(0x41368405cbc0) 0x8a, CmdSN 0xf9 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:00.019Z cpu2:34302)World: 14302: VC opID hostd-b3af maps to vmkernel opID a4d908dd
2016-09-27T21:24:14.409Z cpu8:36432)WARNING: iodm: vmk_IodmEvent:193: vmhba0: FRAME DROP event has been observed 30 times in the last one minute. This suggests a problem with Fibre Channel link/switch!.
2016-09-27T21:24:14.410Z cpu1:33578)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6589cfc00000079083036d56cfd0cc88" state in doubt; requested fast path state update...
2016-09-27T21:24:14.410Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413685316800) 0x8a, CmdSN 0xe4 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.410Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413682312380) 0x8a, CmdSN 0x9d from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.410Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413683361c40) 0x8a, CmdSN 0x78 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.452Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413684d56ac0) 0x8a, CmdSN 0xc3 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.452Z cpu1:33578)NMP: nmp_ThrottleLogForDevice:2322: Cmd 0x8a (0x413684aeb380, 52422) to dev "naa.6589cfc00000079083036d56cfd0cc88" on path "vmhba0:C0:T0:L5" Failed: H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0. Act:EVAL
2016-09-27T21:24:14.452Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413684aeb380) 0x8a, CmdSN 0x92 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.705Z cpu2:33578)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6589cfc00000079083036d56cfd0cc88" state in doubt; requested fast path state update...
2016-09-27T21:24:14.936Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413682abe340) 0x8a, CmdSN 0x68 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.936Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413680416480) 0x8a, CmdSN 0xce from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.936Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413682896bc0) 0x8a, CmdSN 0x99 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:15.360Z cpu1:33578)WARNING: ScsiDeviceIO: 1223: Device naa.6589cfc00000079083036d56cfd0cc88 performance has deteriorated. I/O latency increased from average value of 3623 microseconds to 153093 microseconds.
2016-09-27T21:24:20.019Z cpu1:33986)World: 14302: VC opID hostd-89ef maps to vmkernel opID d2d6bcd3
It's interesting. It's not fatal, and it persists for up to a couple of hours. It seems to vary which VM triggers it, but it's always related to I/O activity, such as extracting files from a large archive.
It looks interesting, but without more data I am not sure which side caused it. Diagnosing it requires much more input: in particular, what commands were executed, what data were sent over the link, etc. In the iSCSI case I would ask you to capture a tcpdump, but for FC there is no equivalent.
Does FreeNAS support:
1) 2x 8Gbit FC ports (QLE 2562)?
2) Does FreeNAS allow a "LUN" to be mapped/presented to multiple hosts at the same time through the FC ports?

Yes and yes.