Can't create new datastore on ESXi 8 with iSCSI zvol

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Is there any reason why creating a new datastore on a simple 100 GB zvol would fail on ESX 8 with the following error? The zvol block size at the pool level is 8K, while the logical block size for the extent is the default 512. Any comments would be appreciated.

[Attached screenshot: datastore creation error]
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I did not enable it (did not check the box). Should I have checked it for ESX 8?
Yes, you can edit the Extent and check that off without having to recreate it.

Did you create the Extent manually or through the Wizard?
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
I tried it both checked and unchecked with no difference ;( I didn't use the Wizard
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The vmkernel.log from ESXi might shed some light on why it can't format the partitions.

I'm able to use it with physical block size reporting both enabled and disabled on my CORE test system (although system topologies may differ), so I'm unsure what's preventing yours from working.

Can you share the output of esxcli storage core device capacity list from your VMware machine? I'm specifically looking at what it reports the physical and logical block sizes of your TrueNAS LUNs to be.
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Let me check the log; for the esxcli output, see below:
[root@esx03:~] esxcli storage core device capacity list
Device Physical Blocksize Logical Blocksize Logical Block Count Size Format Type
------------------------------------------------------------------------ ------------------ ----------------- ------------------- ----------- -----------
t10.NVMe____Samsung_SSD_990_PRO_with_Heatsink_4TB___D0D440314A382500 4096 512 7814037168 3815447 MiB 512e
mpx.vmhba32:C0:T0:L0 512 512 0 0 MiB 512n
naa.6589cfc00000016819a5371aedeadc58 16384 512 209715232 102400 MiB Unknown
t10.ATA_____SATA_SSD________________________________21110524000180______ 512 512 468862128 228936 MiB 512n
t10.ATA_____Samsung_SSD_870_EVO_500GB_______________S5Y4R020A07852002556 512 512 976773168 476940 MiB 512n
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Here is part of the vmkernel.log

2024-02-13T21:11:16.104Z Wa(180) vmkwarning: cpu8:2097582)WARNING: VMW_SATP_ALUA: satp_alua_getTargetPortInfo:190: Could not get page 83 INQUIRY data for path "vmhba64:C0:T1:L0" - Transient storage condition, suggest retry (195887294)
2024-02-13T21:11:16.114Z Wa(180) vmkwarning: cpu0:2097834)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:235: NMP device "naa.6589cfc00000016819a5371aedeadc58" state in doubt; requested fast path state update...
2024-02-13T21:11:17.022Z In(182) vmkernel: cpu1:2098177)iscsi_vmk: iscsivmk_ConnNetRegister:1895: socket 0x4313f9382340 network resource pool netsched.pools.persist.iscsi associated
2024-02-13T21:11:17.022Z In(182) vmkernel: cpu1:2098177)iscsi_vmk: iscsivmk_ConnNetRegister:1922: socket 0x4313f9382340 network tracker id 261101650 tracker.iSCSI.192.168.50.2 associated
2024-02-13T21:11:17.103Z Wa(180) vmkwarning: cpu0:2097834)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:235: NMP device "naa.6589cfc00000016819a5371aedeadc58" state in doubt; requested fast path state update...
2024-02-13T21:11:17.103Z Wa(180) vmkwarning: cpu8:2097582)WARNING: VMW_SATP_ALUA: satp_alua_getTargetPortInfo:190: Could not get page 83 INQUIRY data for path "vmhba64:C0:T1:L0" - Transient storage condition, suggest retry (195887294)
2024-02-13T21:11:17.275Z Wa(180) vmkwarning: cpu1:2098177)WARNING: iscsi_vmk: iscsivmk_StartConnection:917: vmhba64:CH:0 T:1 CN:0: iSCSI connection is being marked "ONLINE"
2024-02-13T21:11:17.275Z Wa(180) vmkwarning: cpu1:2098177)WARNING: iscsi_vmk: iscsivmk_StartConnection:920: Sess [ISID: 00023d000001 TARGET: iqn.2005-10.org.freenas.ctl:test TPGT: 1 TSIH: 0]
2024-02-13T21:11:17.275Z Wa(180) vmkwarning: cpu1:2098177)WARNING: iscsi_vmk: iscsivmk_StartConnection:921: Conn [CID: 0 L: 192.168.50.13:29923 R: 192.168.50.2:3260]
2024-02-13T21:11:22.314Z Wa(180) vmkwarning: cpu12:2098134)WARNING: iscsi_vmk: iscsivmk_ConnReceiveAtomic:478: vmhba64:CH:0 T:1 CN:0: Failed to receive data: Connection reset by peer
2024-02-13T21:11:22.314Z Wa(180) vmkwarning: cpu12:2098134)WARNING: iscsi_vmk: iscsivmk_ConnReceiveAtomic:484: Sess [ISID: 00023d000001 TARGET: iqn.2005-10.org.freenas.ctl:test TPGT: 1 TSIH: 0]
2024-02-13T21:11:22.314Z Wa(180) vmkwarning: cpu12:2098134)WARNING: iscsi_vmk: iscsivmk_ConnReceiveAtomic:485: Conn [CID: 0 L: 192.168.50.13:29923 R: 192.168.50.2:3260]
2024-02-13T21:11:22.314Z In(182) vmkernel: cpu12:2098134)iscsi_vmk: iscsivmk_ConnRxNotifyFailure:1231: vmhba64:CH:0 T:1 CN:0: Connection rx notifying failure: Failed to Receive. State=Online
2024-02-13T21:11:22.314Z In(182) vmkernel: cpu12:2098134)iscsi_vmk: iscsivmk_ConnRxNotifyFailure:1237: Sess [ISID: 00023d000001 TARGET: iqn.2005-10.org.freenas.ctl:test TPGT: 1 TSIH: 0]
2024-02-13T21:11:22.314Z In(182) vmkernel: cpu12:2098134)iscsi_vmk: iscsivmk_ConnRxNotifyFailure:1238: Conn [CID: 0 L: 192.168.50.13:29923 R: 192.168.50.2:3260]
2024-02-13T21:11:22.314Z Wa(180) vmkwarning: cpu2:2097832)WARNING: iscsi_vmk: iscsivmk_StopConnection:735: vmhba64:CH:0 T:1 CN:0: iSCSI connection is being marked "OFFLINE" (Event:6)
2024-02-13T21:11:22.314Z Wa(180) vmkwarning: cpu2:2097832)WARNING: iscsi_vmk: iscsivmk_StopConnection:739: Sess [ISID: 00023d000001 TARGET: iqn.2005-10.org.freenas.ctl:test TPGT: 1 TSIH: 0]
2024-02-13T21:11:22.314Z Wa(180) vmkwarning: cpu2:2097832)WARNING: iscsi_vmk: iscsivmk_StopConnection:740: Conn [CID: 0 L: 192.168.50.13:29923 R: 192.168.50.2:3260]
2024-02-13T21:11:22.332Z Wa(180) vmkwarning: cpu2:2097836)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:235: NMP device "naa.6589cfc00000016819a5371aedeadc58" state in doubt; requested fast path state update...
2024-02-13T21:11:22.332Z Wa(180) vmkwarning: cpu10:2097582)WARNING: VMW_SATP_ALUA: satp_alua_getTargetPortInfo:190: Could not get page 83 INQUIRY data for path "vmhba64:C0:T1:L0" - Transient storage condition, suggest retry (195887294)
2024-02-13T21:11:23.105Z Wa(180) vmkwarning: cpu10:2097582)WARNING: VMW_SATP_ALUA: satp_alua_getTargetPortInfo:190: Could not get page 83 INQUIRY data for path "vmhba64:C0:T1:L0" - Transient storage condition, suggest retry (195887294)
2024-02-13T21:11:23.107Z Wa(180) vmkwarning: cpu2:2097836)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:235: NMP device "naa.6589cfc00000016819a5371aedeadc58" state in doubt; requested fast path state update...
2024-02-13T21:11:24.105Z Wa(180) vmkwarning: cpu10:2097582)WARNING: VMW_SATP_ALUA: satp_alua_getTargetPortInfo:190: Could not get page 83 INQUIRY data for path "vmhba64:C0:T1:L0" - Transient storage condition, suggest retry (195887294)
2024-02-13T21:11:24.117Z Wa(180) vmkwarning: cpu2:2097836)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:235: NMP device "naa.6589cfc00000016819a5371aedeadc58" state in doubt; requested fast path state update...
2024-02-13T21:11:24.293Z In(182) vmkernel: cpu2:2097836)NMP: nmp_ThrottleLogForDevice:3795: last error status from device naa.6589cfc00000016819a5371aedeadc58 repeated 1280 times
2024-02-13T21:11:24.308Z In(182) vmkernel: cpu0:2097236)ScsiDeviceIO: 4617: Cmd(0x45b926ef1000) 0x2a, cmdId.initiator=0x4307dbd91b40 CmdSN 0x3 from world 2100121 to dev "naa.6589cfc00000016819a5371aedeadc58" failed H:0x5 D:0x0 P:0x0 . Cmd count Active:1 Queued:0
2024-02-13T21:11:25.104Z Wa(180) vmkwarning: cpu8:2097582)WARNING: VMW_SATP_ALUA: satp_alua_getTargetPortInfo:190: Could not get page 83 INQUIRY data for path "vmhba64:C0:T1:L0" - Transient storage condition, suggest retry (195887294)
2024-02-13T21:11:25.110Z Wa(180) vmkwarning: cpu2:2097836)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:235: NMP device "naa.6589cfc00000016819a5371aedeadc58" state in doubt; requested fast path state update...
2024-02-13T21:11:25.355Z In(182) vmkernel: cpu0:2098177)iscsi_vmk: iscsivmk_ConnNetRegister:1895: socket 0x4313f9382340 network resource pool netsched.pools.persist.iscsi associated
2024-02-13T21:11:25.355Z In(182) vmkernel: cpu0:2098177)iscsi_vmk: iscsivmk_ConnNetRegister:1922: socket 0x4313f9382340 network tracker id 261101650 tracker.iSCSI.192.168.50.2 associated
2024-02-13T21:11:25.606Z Wa(180) vmkwarning: cpu0:2098177)WARNING: iscsi_vmk: iscsivmk_StartConnection:917: vmhba64:CH:0 T:1 CN:0: iSCSI connection is being marked "ONLINE"
2024-02-13T21:11:25.606Z Wa(180) vmkwarning: cpu0:2098177)WARNING: iscsi_vmk: iscsivmk_StartConnection:920: Sess [ISID: 00023d000001 TARGET: iqn.2005-10.org.freenas.ctl:test TPGT: 1 TSIH: 0]
2024-02-13T21:11:25.606Z Wa(180) vmkwarning: cpu0:2098177)WARNING: iscsi_vmk: iscsivmk_StartConnection:921: Conn [CID: 0 L: 192.168.50.13:32692 R: 192.168.50.2:3260]
2024-02-13T21:11:26.106Z In(182) vmkernel: cpu12:2097846)NMP: nmp_ThrottleLogForDevice:3812: last error status from device naa.6589cfc00000016819a5371aedeadc58 repeated 118 times
2024-02-13T21:11:26.106Z In(182) vmkernel: cpu12:2097846)NMP: nmp_ThrottleLogForDevice:3863: Cmd 0xa3 (0x45b904082600, 0) to dev "naa.6589cfc00000016819a5371aedeadc58" on path "vmhba64:C0:T1:L0" Failed:
2024-02-13T21:11:26.106Z In(182) vmkernel: cpu12:2097846)NMP: nmp_ThrottleLogForDevice:3868: H:0x0 D:0x2 P:0x0 Valid sense data: 0x6 0x29 0x7. Act:NONE. cmdId.initiator=0x4538cd71bbc8 CmdSN 0x0
2024-02-13T21:11:26.106Z In(182) vmkernel: cpu8:2097582)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:966: Path (vmhba64:C0:T1:L0) command 0xa3 : Failed with transient error status Transient storage condition, suggest retry. sense data: 0x6 0x29 0x7.
2024-02-13T21:11:26.106Z In(182) vmkernel: cpu8:2097582)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:972: Path (vmhba64:C0:T1:L0) command 0xa3 : Waiting for 20 seconds for the transient error to change before marking the path down
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
I have a dedicated dual-port 10G NIC (only using one port so far - 192.168.50.13) on the ESX host connected to a dedicated 10G switch, and the switch has a dedicated link back down to the TrueNAS at 192.168.50.2. It is on its own vSwitch and port group with its own vmkernel port. I also confirmed the network connection with vmkping:

[root@esx03:/var/log] vmkping 192.168.50.2
PING 192.168.50.2 (192.168.50.2): 56 data bytes
64 bytes from 192.168.50.2: icmp_seq=0 ttl=64 time=1.075 ms
64 bytes from 192.168.50.2: icmp_seq=1 ttl=64 time=1.211 ms
64 bytes from 192.168.50.2: icmp_seq=2 ttl=64 time=0.710 ms

Given that I can mount it, I would presume the network path is good. Also, no VLANs are set on any port, but they can be if needed.
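A plain vmkping only sends small (56-byte) payloads, so it doesn't rule out MTU or fragmentation problems on the path. A sketch of a stricter test with full-size, non-fragmentable packets - vmk1 here is an assumption, substitute whichever vmkernel port carries the iSCSI traffic:

```shell
# List vmkernel interfaces to find the iSCSI vmk port and its MTU
esxcli network ip interface list

# Ping with the don't-fragment bit set and a near-MTU payload.
# 1472 = 1500 MTU minus 28 bytes of IP/ICMP headers; use 8972 for MTU 9000.
vmkping -d -s 1472 -I vmk1 192.168.50.2
```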
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Please use the CODE tags on the forum for ease of readability where possible.

Code:
2024-02-13T21:11:24.117Z Wa(180) vmkwarning: cpu2:2097836)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:235: NMP device "naa.6589cfc00000016819a5371aedeadc58" state in doubt; requested fast path state update...
2024-02-13T21:11:24.293Z In(182) vmkernel: cpu2:2097836)NMP: nmp_ThrottleLogForDevice:3795: last error status from device naa.6589cfc00000016819a5371aedeadc58 repeated 1280 times
2024-02-13T21:11:24.308Z In(182) vmkernel: cpu0:2097236)ScsiDeviceIO: 4617: Cmd(0x45b926ef1000) 0x2a, cmdId.initiator=0x4307dbd91b40 CmdSN 0x3 from world 2100121 to dev "naa.6589cfc00000016819a5371aedeadc58" failed H:0x5 D:0x0 P:0x0 . Cmd count Active:1 Queued:0


This seems to suggest there's a network condition or failure causing the traffic not to be passed.
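If it is network-level, the TrueNAS side may show symptoms too. A rough sketch of what could be checked from the TrueNAS (FreeBSD/CORE) shell - the ix0 interface name is an assumption for your Intel 10G NIC:

```shell
# TCP-level symptoms: retransmitted and dropped segments
netstat -s -p tcp | grep -Ei 'retransmitted|dropped'

# Interface-level errors/drops on the 10G NIC (ix0 assumed)
netstat -i -I ix0
```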
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
@dwchan69 , given that Broadcom has just terminated the free ESXi product, this may be a good time to look at the alternatives.
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Given this is a network issue, I'm not sure replacing ESXi with something else would make my problem go away ;)
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
I tested this on a Windows 11 client over iSCSI with no issue (same HW, more or less - X540-T1 vs X540-T2)! I will test this out next on ESX 7U3.
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
I have an ESX 7 environment (with a similar NIC, just Dell-branded) that I will be testing this out on. Furthermore, I plan to vet this out with a different NIC to see if I can at least narrow down the origin.
 

dwchan69

Contributor
Joined
Nov 20, 2013
Messages
141
Curiously, is there any special switch configuration (e.g. MTU 9000) one needs to do for iSCSI to work properly?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Curiously, is there any special switch configuration (e.g. MTU 9000) one needs to do for iSCSI to work properly?
Not for functionality. For performance there are a number of best practices, but iSCSI should at least work as long as there's TCP connectivity in both directions between initiator and target.
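If you do enable jumbo frames, the MTU has to match end to end - vmkernel port, vSwitch, physical switch, and the TrueNAS interface - or you can see resets much like the ones in your log. A rough way to verify each side (interface names are assumptions):

```shell
# ESXi: MTU on the vSwitch and on the iSCSI vmkernel port
esxcli network vswitch standard list
esxcli network ip interface list

# TrueNAS shell: MTU of the storage interface (ix0 assumed)
ifconfig ix0 | grep -o 'mtu [0-9]*'
```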
 