Error connecting multiple ESX servers to the same iSCSI target


macmac1

Dabbler
Joined
Apr 9, 2014
Messages
17
I have an iSCSI target exported from a FreeNAS box, v9.2.1.3.
It is mounted on an ESX server (5.5). The datastore is VMFS-formatted and stores some virtual machines.
Now I want to connect the same iSCSI target to another ESX server (exactly the same version as the first one).
On the second ESX box, I add the target in the Storage Adapters section and do "Rescan All". But when I try to add it via "Add Storage...", ESX wants to format the drive: in the "Select VMFS Mount Options" dialog, all options but "Format the disk" are greyed out.
What is wrong?
Browsing the net, I found arguments that one should not do this. That's definitely not true, for at least the following reasons:
  • VMFS is designed to handle such a configuration
  • I successfully use such a multiple-initiator config with a Synology box
  • I was already using FreeNAS boxes (v 8.X and 9.0) in this way. Now I want to set it up with new FreeNAS boxes and I got stuck.
So - what is wrong? Any configuration peculiarities that I've missed? Or an FN bug?
Has anybody done this with FN 9.2?
Please help.
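
For completeness, the shell-level equivalent of what I did in the GUI is roughly the following (commands quoted from memory, output omitted - the device itself shows up fine, it is only the mount that is refused). The grep string matches the UnitInquiry vendor/product I set in istgt, so adjust it if yours differs:

# esxcli iscsi adapter list
# esxcli storage core adapter rescan --all
# esxcli storage core device list | grep -i "iscsi disk"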
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I haven't tried to do what you are doing, but claiming it works because a Synology box works is very flawed logic.

A two-minute Google search turns up that VMFS is a clustered file system, so it should work. But unless you can provide errors from your ESXi server logs or something similar, I can't provide much help.
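
If you SSH into the host, the usual places to look on 5.5 are /var/log/vmkernel.log and /var/log/hostd.log. Something along these lines (the grep pattern is only a suggestion) while you repeat the rescan and the "Add Storage..." attempt should show what the host is objecting to:

# tail -f /var/log/vmkernel.log | grep -i -E "iscsi|vmfs|snapshot"
# less /var/log/hostd.log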
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Sounds like you have a mistake in your iSCSI config on FN. Are you sure you don't have two extents instead of one?

I have this config working right now with ESXi 5.1, though my hosts are clustered with vCenter.
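
A quick way to see how many targets and extents istgt is really exporting is to look at the generated config on the FN box (the path below is the 9.2.x default):

# grep -E "^\[LogicalUnit|TargetName|LUN0 Storage" /usr/local/etc/istgt/istgt.conf

There should be one [LogicalUnitN] section per target, and each "LUN0 Storage" line is a separate extent.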
 

macmac1

Dabbler
Joined
Apr 9, 2014
Messages
17
"If A works with B and C does not work with B then it is reasonable to look for problems at C" seems pretty good rule-of-thumb to me, but - forget Synology.
I don't like Synology for number of reasons (e.g. ancient Samba version) - that's why I'm trying to go back to FreeNAS.

Anyway - I used such configuration with FreeNAS in older version and ESX (4.X and 5.X) for maybe 2 years without problems.
In ESX logs, I cannot find anything interesting.
I've already tried to mount this iSCSI target from two different ESX servers with same result.
 

macmac1

Dabbler
Joined
Apr 9, 2014
Messages
17
ser_rhaegar said:
Sounds like you have a mistake in your iSCSI config on FN. Are you sure you don't have two extents instead of one?

I have this config working right now with ESXi 5.1, though my hosts are clustered with vCenter.

Yes, I suppose this is my config error.
I have 3 different extents and 3 different targets, but I don't think this should matter.

Maybe this has something to do with the ESX host that formatted the partition initially? It was also v5.5.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
How many iSCSI disks are you aiming for, 1 or 3? If 1, you only need 1 extent. It sounds like one host is seeing one extent and formatted it, while the other host is seeing a different extent which is not formatted.
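
One way to confirm or rule that out: note which device backs the datastore on the host that formatted it, then check whether the second host sees that same device (the naa identifier will be whatever your setup reports, I'm not quoting a real one):

On the first host:
# esxcli storage vmfs extent list
On the second host:
# esxcli storage core device list | grep -i naa

If the naa identifier from the extent list doesn't appear on the second host, the two hosts are looking at different extents.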
 

macmac1

Dabbler
Joined
Apr 9, 2014
Messages
17
For test purposes, I've removed 2 of the targets on the FN box and left only one - still the same problem.
This is my iSCSI config:

~# freenas-debug -i
+--------------------------------------------------------------------------------+
+ FreeNAS-9.2.1.3-RELEASE-x64 (dc0c46b) +
+--------------------------------------------------------------------------------+
Operating system type: FreeBSD
Operating system release: 9.2-RELEASE-p3
Operating system revision: 199506
Kernel version: FreeBSD 9.2-RELEASE-p3 #0 r262572+38751c8: Thu Mar 20 21:13:02 PDT 2014
root@build.ixsystems.com:/tank/home/jkh/9.2.1-BRANCH/freenas/os-base/amd64/tank/home/jkh/9.2.1-BRANCH/freenas/FreeBSD/src/sys/FREENAS.amd64
Hostname: x48svr61xfn1.xdsnet.pl
Name of kernel file booted: /boot/kernel/kernel


+--------------------------------------------------------------------------------+
+ /usr/local/etc/istgt/istgt.conf +
+--------------------------------------------------------------------------------+
[Global]
NodeBase "iqn.2011-03.pl.xdsnet.istgt1"
PidFile "/var/run/istgt.pid"
AuthFile "/usr/local/etc/istgt/auth.conf"
MediaDirectory /mnt
Timeout 30
NopInInterval 20
MaxR2T 32
DiscoveryAuthMethod None
MaxSessions 16
MaxConnections 8
FirstBurstLength 65536
MaxBurstLength 262144
MaxRecvDataSegmentLength 262144
MaxOutstandingR2T 16
DefaultTime2Wait 2
DefaultTime2Retain 60

[UnitControl]

[PortalGroup1]
Portal DA1 0.0.0.0:3260

[InitiatorGroup1]
InitiatorName "ALL"
Netmask ALL

[LogicalUnit1]
TargetName "target1"
TargetAlias "tgt1"
Mapping PortalGroup1 InitiatorGroup1
AuthMethod Auto
UseDigest Auto
ReadOnly No
UnitType Disk
UnitInquiry "FreeBSD" "iSCSI Disk" "0123" "002590f0daa600"
UnitOnline yes
BlockLength 512
QueueDepth 32
LUN0 Storage /dev/zvol/zfsVol1/zfsVolume1 auto
LUN0 Option Serial 002590f0daa6000
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358

macmac1

Dabbler
Joined
Apr 9, 2014
Messages
17
Thank you for the advice - I can confirm it does indeed work!

I found the following ESX KB article that seems to refer to this problem: http://kb.vmware.com/selfservice/search.do?cmd=displayKC&externalId=1011387

It looks like VMware can incorrectly recognize the volume as a snapshot.

I had to do the following on the ESX server (via SSH):

# esxcli storage vmfs snapshot mount -u "52d7d8f8-66ebccbc-28da-003048db5f24"

After this, the volume gets mounted and no additional action in the vSphere Client GUI was needed.
To get the UUID of the "snapshot" (I have no snapshots on this ESX host at all):

# esxcli storage vmfs snapshot list
52d7d8f8-66ebccbc-28da-003048db5f24
Volume Name: x48svr61xfn1-Vol1
VMFS UUID: 52d7d8f8-66ebccbc-28da-003048db5f24
Can mount: true
Reason for un-mountability:
Can resignature: false
Reason for non-resignaturability: the volume is being actively used
Unresolved Extent Count: 1
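
For completeness - if I read the KB correctly, the mount can also be requested by volume label instead of UUID, and there is a resignature alternative that assigns the volume a new UUID. I have not tried either of these; resignaturing is refused in my case anyway ("the volume is being actively used" above), and as far as I understand it would force the first host to treat the datastore as a new one and re-register its VMs:

# esxcli storage vmfs snapshot mount -l "x48svr61xfn1-Vol1"
# esxcli storage vmfs snapshot resignature -u "52d7d8f8-66ebccbc-28da-003048db5f24"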


Thanks once again for the ultra-fast response.

Just one more thing:

Quoting the VMware KB article:
This can be caused by replaced SAN hardware, firmware upgrades, SAN replication, DR tests, and some HBA firmware upgrades.

So it looks like a VMware issue, not a FreeNAS one. But this makes me wonder:
The volume in question has periodic snapshots and a replication task configured (to yet another FN box).
This should have nothing to do with VMware, I think (on ESX I mount the snapshot source, not the destination).
But maybe it is related somehow: the FreeNAS configuration that worked for me in the past had no snapshots enabled.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
My zvol has snapshots and I have not experienced this issue, though I cannot recall whether I configured snapshots before or after setting up my cluster. I am also able to mount the replicated copy from my backup FN VM on a clean ESXi box without issue and launch VMs (tested last night).

If you're doing snapshots/replication, make sure you follow through and test those as well. Keep in mind that a VM running during the snapshot will act as though power was cut when you launch it from your backup (which could be very bad depending on the VM).
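
On the backup box, something like this is enough to confirm the replicated snapshots are actually arriving (the pool/dataset name is only an example - use whatever your replication task targets):

# zfs list -t snapshot -r backup/zfsVolume1 | tail -n 5

The newest snapshot in that list should match what the periodic snapshot task on the source just created.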
 