SOLVED iSCSI issues with ESXi 6.5

Status
Not open for further replies.
Joined
Nov 10, 2018
Messages
9
Hey all,

I just came over to FreeNAS from XigmaNAS. I'm trying to set up iSCSI with MPIO & jumbo frames (which I did have working previously). The issue I'm encountering is that ESXi can see the paths, portals, etc., but cannot see the (extent?) storage device I am attempting to present to it.

I can ping all iSCSI-related ports on both machines (ESXi ---> FreeNAS, and FreeNAS ---> ESXi) and have confirmed that the MTU is set to 9k on both ends, on all iSCSI interfaces. Packets aren't being fragmented, and this is point-to-point, so there isn't a switch between them to mess with me. Any in-depth iSCSI settings have not been messed with, it hasn't been "tuned" (and I can't find info on that stuff anyway..)
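For the MTU check specifically, I used don't-fragment pings sized to the jumbo payload. A sketch of what I ran (the ESXi-side IP here is just a placeholder, substitute your own):

```shell
#!/bin/sh
# Don't-fragment pings at (MTU - 28) prove the full 9000-byte path:
# 9000 minus 20 (IP header) minus 8 (ICMP header) = 8972 bytes of payload.
MTU=9000
PAYLOAD=$((MTU - 28))
# 192.168.10.10 is the FreeNAS portal; 192.168.10.20 stands in for the
# ESXi vmk IP on the same subnet -- substitute your own addresses.
echo "ESXi side:    vmkping -I vmk1 -d -s $PAYLOAD 192.168.10.10"
echo "FreeNAS side: ping -D -s $PAYLOAD 192.168.10.20"
```

If either side can ping normally but fails at 8972 bytes with don't-fragment set, the jumbo path is broken somewhere.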

ESXi version 6.5 U2 specifically.

Here is the hardware running FreeNAS 11.1 U6:
The WD Black drives are 3x Mirrored vdevs, 2 drives each (this is for ESXi). This would be presented as a zvol.
The HGST, though irrelevant to this issue, are mirrored (used for SMB/CIFS)

Here is a link to an Imgur gallery with some config info on the FreeNAS and iSCSI side.

Up until setting up iSCSI, setup and config were pretty painless and quick.

Any suggestions? I'm at quite a loss here, so some help would be really appreciated!

Thanks in advance.

Edit: some guides I followed:
Link 1
Link 2
Link 3
Link 4
Link 5
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Targets generally don't get naa.IDs. The devices that the target presents get the naa.987645873645 IDs. The targets will show up as base-name:target-name. In your case, iqn.2018-11.org.freenas.storage:naa.esxi-iscsi. Not a technical issue, just an odd choice.
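Something like this would be more conventional in ctl.conf (names here are just illustrative, not from your config):

```
target iqn.2018-11.org.freenas.storage:datastore0 {
        alias "datastore0"
        portal-group pg1 no-authentication
        lun 0 "datastore0"
}
```

Initiators also generally expect a LUN 0 to exist, so numbering from 0 doesn't hurt.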

On your ESXi host run esxcli storage core device list and post that in [ code ] tags please.
Also take a look at https://pubs.vmware.com/vsphere-6-0...e.vcli.examples.doc/cli_manage_files.5.5.html as this is essentially what we are going to work from.
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
+1 to what @HoneyBadger said. I've posted quite a number of times on this forum about the proper method for iSCSI setup. Just do some searching. Is the FreeNAS management interface also using the 192.168.0.0 subnet? If so, you should NOT have the 192.168.0.10 portal IP in your iSCSI setup. Post the output of ifconfig in CODE tags here.
 
Joined
Nov 10, 2018
Messages
9
Targets generally don't get naa.IDs. The devices that the target presents get the naa.987645873645 IDs. The targets will show up as base-name:target-name. In your case, iqn.2018-11.org.freenas.storage:naa.esxi-iscsi. Not a technical issue, just an odd choice.

On your ESXi host run esxcli storage core device list and post that in [ code ] tags please.
Also take a look at https://pubs.vmware.com/vsphere-6-0/index.jsp?topic=/com.vmware.vcli.examples.doc/cli_manage_files.5.5.html as this is essentially what we are going to work from.

Ok, on the naa ID, I'll work on changing that. The reason for setting it manually was something I was trying: on the other distro, when setting up MPIO, I didn't see an naa ID, so I thought maybe it was something I had to set manually. No biggie lol.

Here is the output for esxcli storage core device list

Code:
t10.ATA_____SATA_SSD________________________________18072512003054______
   Display Name: Local ATA Disk (t10.ATA_____SATA_SSD________________________________18072512003054______)
   Has Settable Display Name: true
   Size: 114473
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/t10.ATA_____SATA_SSD________________________________18072512003054______
   Vendor: ATA	
   Model: SATA SSD		
   Revision: SBFM
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: false
   Is SSD: true
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: yes
   Attached Filters:
   VAAI Status: unknown
   Other UIDs: vml.01000000003138303732353132303033303534202020202020534154412053
   Is Shared Clusterwide: false
   Is Local SAS Device: false
   Is SAS: false
   Is USB: false
   Is Boot USB Device: false
   Is Boot Device: true
   Device Max Queue Depth: 1
   No of outstanding IOs with competing worlds: 1
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false


I'll work on reading through that link, thanks.

You've got separate subnets but you're using iSCSI port binding - this isn't recommended.

https://kb.vmware.com/s/article/2038869

+1 to what @HoneyBadger said. I've posted quite a number of times on this forum about the proper method for iSCSI setup. Just do some searching. Is the FreeNAS management interface also using the 192.168.0.0 subnet? If so you should NOT have the 192.168.0.10 portal IP on your iSCSI setup. Post the output of ifconfig in CODE tags here.

I was not aware that this was not recommended, thank you. I'll get that fixed up and changed. The FreeNAS management is on 192.168.1.0/24, but this will be changed soon as I'm in the process of re-doing my network, so it won't even be in that subnet in a few days' time. I'll try to search for some of your iSCSI setup posts as well.

Code:
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
		options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
		ether a0:36:9f:20:b6:64
		hwaddr a0:36:9f:20:b6:64
		inet 192.168.0.10 netmask 0xffffff00 broadcast 192.168.0.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
		options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
		ether a0:36:9f:20:b6:65
		hwaddr a0:36:9f:20:b6:65
		inet 192.168.10.10 netmask 0xffffff00 broadcast 192.168.10.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
igb2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
		options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
		ether a0:36:9f:20:b6:66
		hwaddr a0:36:9f:20:b6:66
		inet 192.168.11.10 netmask 0xffffff00 broadcast 192.168.11.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
igb3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
		options=6403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
		ether a0:36:9f:20:b6:67
		hwaddr a0:36:9f:20:b6:67
		inet 192.168.12.10 netmask 0xffffff00 broadcast 192.168.12.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
		ether 00:25:90:ad:30:7a
		hwaddr 00:25:90:ad:30:7a
		inet 192.168.1.250 netmask 0xffffff00 broadcast 192.168.1.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
em1: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
		ether 00:25:90:ad:30:7b
		hwaddr 00:25:90:ad:30:7b
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect
		status: no carrier
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
		options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
		inet6 ::1 prefixlen 128
		inet6 fe80::1%lo0 prefixlen 64 scopeid 0x7
		inet 127.0.0.1 netmask 0xff000000
		nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
		groups: lo
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Here is the output for esxcli storage core device list
Well, that is definitely only showing your 120GB boot SSD.
Based on your screenshots, we can see FreeNAS and establish a SendTargets session, and get the list of targets and portals, but no devices. Can you provide the output of cat /etc/ctl.conf? Please don't censor anything; there's nothing sensitive in that file. The extent serial numbers are just random anyway.
 
Joined
Nov 10, 2018
Messages
9
Well, that is definitely only showing your 120GB boot SSD.
Based on your screenshots, we can see FreeNAS and establish a SendTargets session, and get the list of targets and portals, but no devices. Can you provide the output of cat /etc/ctl.conf? Please don't censor anything; there's nothing sensitive in that file. The extent serial numbers are just random anyway.

Here you go:

Code:

portal-group default {
}

portal-group pg1 {
		tag 0x0001
		discovery-filter portal-name
		discovery-auth-group no-authentication
		listen 192.168.0.10:3260
		listen 192.168.10.10:3260
		listen 192.168.11.10:3260
		listen 192.168.12.10:3260
		option ha_shared on
}

lun "datastore0" {
		ctl-lun 0
		path "/dev/zvol/esxi_iscsi/datastore_0"
		blocksize 4096
		option pblocksize 0
		serial "a0369f20b66400"
		device-id "iSCSI Disk	  a0369f20b66400				 "
		option vendor "FreeNAS"
		option product "iSCSI Disk"
		option revision "0123"
		option naa 0x6589cfc000000840d7ca229b3e0f1aa6
		option rpm 7200
}

target naa.esxi-iscsi {
		alias "naa.esxi-iscsi"
		portal-group pg1 no-authentication

		lun 1 "datastore0"
}

 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Network config looks ok on the FreeNAS side...you've got management on a separate subnet. What version of ESXi are you running? Try changing your iSCSI extent on FreeNAS to use 512-byte blocks instead of 4096 and see if that works.

Edit: NM...I see the title...it says ESXi 6.5. Make the change to 512-byte and the device should show up.
 
Joined
Nov 10, 2018
Messages
9
Network config looks ok on the FreeNAS side...you've got management on a separate subnet. What version of ESXi are you running? Try changing your iSCSI extent on FreeNAS to use 512-byte blocks instead of 4096 and see if that works.

Edit: NM...I see the title...it says ESXi 6.5. Make the change to 512-byte and the device should show up.

Changed the extent to use 512 byte logical block size, still no dice.

And I do have ESXi 6.5 U2, specifically, I'll update OP with that.

Edit: Ok... weird. I changed it to 512, saved the changes on FreeNAS, went to ESXi, rescanned the HBAs and then rescanned devices, and still no dice.

I changed the block size back to 4096 for giggles, saved everything, refreshed everything, and BAM, it's there... I don't trust it.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
(Edit: Your first post says you're able - did you use vmkping as below to force the originating interface? Try dropping the port bindings from your HBA as well.)

Are you able to ping each of the FreeNAS IPs from the associated ESXi NIC in the same subnet?

You can do this from an SSH shell on your ESXi host with (example below)

vmkping -I vmk1 192.168.0.10

Adjust the vmkN and target IPs as needed - you're basically trying to force each vmkN interface to ping what it's directly connected to.
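If you want to sweep all four paths at once, something like this generates the full set. The vmk-to-portal pairing below is my guess from your ifconfig output, so confirm yours first:

```shell
#!/bin/sh
# One don't-fragment jumbo vmkping per path; drop the echo to actually run them.
# The vmk numbers are assumptions -- list yours with: esxcfg-vmknic -l
for pair in vmk1:192.168.0.10 vmk2:192.168.10.10 vmk3:192.168.11.10 vmk4:192.168.12.10; do
    vmk=${pair%%:*}
    ip=${pair##*:}
    echo "vmkping -I $vmk -d -s 8972 $ip"
done
```

Paste the printed lines into the ESXi shell (or remove the echo) to run them for real.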
 
Joined
Nov 10, 2018
Messages
9
(Edit: Your first post says you're able - did you use vmkping as below to force the originating interface? Try dropping the port bindings from your HBA as well.)

Are you able to ping each of the FreeNAS IPs from the associated ESXi NIC in the same subnet?

You can do this from an SSH shell on your ESXi host with (example below)

vmkping -I vmk1 192.168.0.10

Adjust the vmkN and target IPs as needed - you're basically trying to force each vmkN interface to ping what it's directly connected to.

Yup, everything works with vmkping. I'm going to reboot both hosts real quick and see if the device still shows up; I don't like that it magically appeared, seems fishy.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Network config looks ok on the FreeNAS side...you've got management on a separate subnet. What version of ESXi are you running? Try changing your iSCSI extent on FreeNAS to use 512-byte blocks instead of 4096 and see if that works.

Edit: NM...I see the title...it says ESXi 6.5. Make the change to 512-byte and the device should show up.
You should still see the device but not be able to format it.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
If it's still working, please post your current ctl.conf for comparison. If it stops working again, grab the output of ctladm devlist -v as well. That will list the "devices" that are actively configured in CTL, not just the config file. I suspect the service/config was not being reloaded correctly.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Have you removed the port-binding on your HBA yet? From the VMware KB article:

If you configure port binding in this (your) configuration, you may experience these issues:
  • Rescan times take longer than usual.
  • Incorrect number of paths are seen per device.
  • Unable to see any storage from the storage device.
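You can unbind in the host client UI, or from the ESXi shell. This just prints the commands, since the vmhba name and the vmk list are assumptions on my part:

```shell
#!/bin/sh
# vmhba64 is an assumption -- confirm yours with: esxcli iscsi adapter list
HBA=vmhba64
echo "esxcli iscsi networkportal list -A $HBA"
for vmk in vmk1 vmk2 vmk3 vmk4; do
    echo "esxcli iscsi networkportal remove -A $HBA -n $vmk"
done
# Rescan afterwards so ESXi re-discovers the paths without the bindings:
echo "esxcli storage core adapter rescan -A $HBA"
```

Run the printed lines on the host once you've substituted the real adapter and vmk names.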
 
Joined
Nov 10, 2018
Messages
9
You should still see the device but not be able to format it.

Shouldn't ESXi be able to handle an AF device with a 4096-byte block size..?

If it's still working, please post your current ctl.conf for comparison.

It isn't, upon reboot the device wasn't there.

Have you removed the port-binding on your HBA yet? From the VMware KB article:

Not yet, but I'm going to work on that now; I have to read the whole KB first. I'll post when I've made the changes and what happens.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Have you removed the port-binding on your HBA yet? From the VMware KB article:
I would think that would affect the target visibility as well. Target discovery uses the same type of session as the actual transport.
Shouldn't ESXi be able to handle an AF device with 4096 blocksize..?
You would think... It was not until more recently (6.5?) that support for 4K drives was added. This was not a limit of ESXi itself but of the version of (I forget which) partitioning utility from BusyBox that was used until they finally fixed it. I spent over a week digging into that a few years ago.
It isn't, upon reboot the device wasn't there.
The other thing we can do is sniff some packets and see whether FreeNAS is advertising the LUNs correctly and ESXi has an issue, or whether FreeNAS is not even reporting the LUNs from the target.
Gotta love troubleshooting.
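If it comes to that, the capture itself is one command on the FreeNAS box. igb1 here is a stand-in; use whichever interface actually carries the session:

```shell
#!/bin/sh
# igb1 is a stand-in -- pick the storage NIC that carries the iSCSI session.
IF=igb1
# Capture TCP 3260 to a pcap, then look at the REPORT LUNS exchange in Wireshark.
echo "tcpdump -ni $IF -s 0 -w /tmp/iscsi.pcap port 3260"
```

Run the printed command during a rescan from ESXi and the pcap should show whether the LUN list ever leaves FreeNAS.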
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Joined
Nov 10, 2018
Messages
9
Ok, so after setting the logical sector size on the extent to 512, and removing the port binding from the iSCSI settings, I was able to see and format the drive presented. Thanks to everyone for all the help, I really appreciate it. I now have my lab back!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Take a quick peek at your multipathing policy for the LUN - it should be round-robin - and also set the IOPS threshold to 1 for switching paths, since you've only got a single directly-connected host and want it to fan out across those paths as much as possible.

To create an SATP claimrule for this to handle new LUNs automatically, SSH into your host and enter:

esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V "FreeNAS" -M "iSCSI Disk" -P VMW_PSP_RR -O iops=1 -e "FreeNAS SATP rule"

Reboot your host, and you're set.
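One caveat: the claim rule only applies when a device is claimed, so for the LUN that's already present you can set it directly. The naa below is lifted from the ctl.conf posted earlier in this thread; verify it against what the device list actually shows:

```shell
#!/bin/sh
# Device ID taken from the ctl.conf posted above -- verify with:
#   esxcli storage nmp device list
DEV=naa.6589cfc000000840d7ca229b3e0f1aa6
# Switch the existing LUN to round-robin, then set the path-switch IOPS to 1.
echo "esxcli storage nmp device set -d $DEV -P VMW_PSP_RR"
echo "esxcli storage nmp psp roundrobin deviceconfig set -d $DEV -t iops -I 1"
```

Run the printed commands on the host and you skip the reboot for the LUN you already have.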
 
Joined
Nov 10, 2018
Messages
9
Awesome, thank you, I just ran it. I'll test it out and make sure it's working.

I've just recently started learning the esxcli commands, so those are pretty new to me.

I actually have another question about iSCSI, but it's more related to network hardware.. specifically NICs. Would you suggest starting another thread in a different part of the forum?
 