TrueNAS-SCALE-22.02.4 iSCSI w/ vSphere 7 hosts fail to create VMFS/path issues

5mall5nail5

Dabbler
Joined
Apr 29, 2020
Messages
14
Hi all -

I have two ESXi hosts in a cluster that have been pointing at the same physical TrueNAS/FreeNAS box (it was FreeNAS, then TrueNAS Core, now TrueNAS Scale). The box has Chelsio 10 GbE NICs set up in separate VLANs for MPIO, and each ESXi host has two vmkernels in matching VLANs for MPIO. This was operational for roughly 5 years across various versions of FreeNAS/TrueNAS Core. I've now upgraded to TrueNAS Scale and it worked... until I rebooted my hosts.

First, I noticed the rebooted ESXi host came up with both paths to the iSCSI devices marked dead. I had set up claim rules targeting TrueNAS iSCSI disks to use the round robin path policy with iops=1, per VMware best practice (this worked great on FreeNAS/TrueNAS Core; a sketch of the kind of rule I mean is below). The claim rule persisted after the TrueNAS Scale upgrade, but since the hosts had not rebooted I guess everything just reconnected and kept working. Once I rebooted, the paths went down. I removed the claim rules, rebooted the hosts, and the paths came back. I had already storage vMotion'd my VMs off via a host that had not yet been rebooted and moved them onto NFS storage outside of TrueNAS Scale. I then deleted my zvol, pool, iSCSI extents, etc. and rebuilt the storage pool. I now see two paths to the device, but when I go to create a VMFS6 datastore in vCenter I get:

An error occurred during host configuration. Operation failed, diagnostics report: Unable to create Filesystem, please see VMkernel log for more details: Failed to create VMFS on device naa.6589cfc0000007f4d46f249100690cf3:1
I know TrueNAS Scale is still young, but I'm surprised because the storage had been working right after the reboot from TrueNAS Core into TrueNAS Scale, yet here I am. I have iSCSI working to a Synology from the same ESXi hosts, nothing has changed in the vmnic/vmkernel/iSCSI configuration on the ESXi side, and I have not changed ESXi versions in any way between it working and not working.
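For reference, the claim rule I'm describing was along these lines (a sketch from memory - the vendor/model strings need to match whatever the extents actually report on your hosts):

esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V "TrueNAS" -M "iSCSI Disk" -P VMW_PSP_RR -O "iops=1" -e "TrueNAS iSCSI RR iops=1"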

In vmkernel.log on an affected host I see:

2022-10-07T13:07:35.694Z cpu34:2097833)NMP: nmp_ResetDeviceLogThrottling:3776: Error status H:0x0 D:0x2 P:0x0 Sense Data: 0x5 0x24 0x0 from dev "naa.6589cfc0000007f4d46f249100690cf3" occurred 1 times(of 0 commands)
2022-10-07T13:11:35.702Z cpu36:2097503)StorageDevice: 7059: End path evaluation for device naa.6589cfc0000007f4d46f249100690cf3
...
2022-10-07T13:35:44.926Z cpu40:2100048)LVM: 7076: Failed to probe the VMFS header of naa.6589cfc0000007f4d46f249100690cf3:1 due to No filesystem on the device

I've played with presenting the extent with a 512 logical block size, enabling "Disable Physical Block Size Reporting", confirming my extents are enabled, etc. I can see the device, I just cannot do anything with it - so odd.
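In case it helps anyone compare, this is roughly how I've been checking what ESXi actually sees for the LUN (device ID from the logs above; exact output varies a bit by ESXi build):

esxcli storage core device list -d naa.6589cfc0000007f4d46f249100690cf3
esxcli storage core device capacity list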

Update: I forgot that TrueNAS saves boot environments. I simply reverted to my TrueNAS Core boot environment (my pools were offline, of course), recreated the pool, iSCSI extents, etc., and it works as expected. So something is up in TrueNAS Scale with iSCSI and vSphere.

Thanks for reading all!
 
Last edited:

ericsmith881

Dabbler
Joined
Mar 26, 2021
Messages
29
Did this ever get resolved? I'm testing TrueNAS Scale 22.12, doing a fresh install to replace my prior TrueNAS Core. I'm having the exact same problem. No matter what record size I choose, and whether or not "Disable Physical Block Size Reporting" is enabled, VMware will not format this as a VMFS6 datastore. Is this broken in Scale?
 
Last edited:

5mall5nail5

Dabbler
Joined
Apr 29, 2020
Messages
14
Did this ever get resolved? I'm testing TrueNAS Scale, doing a fresh install to replace my prior TrueNAS Core. I'm having the exact same problem. No matter what record size I choose, and whether or not "Disable Physical Block Size Reporting" is enabled, VMware will not format this as a VMFS6 datastore. Is this broken in Scale?
I never got it to work and never figured it out. Tried everything that I could. Check your vmkernel logs and you'll see the same errors I bet. I rolled back to TrueNAS Core.
 

blitztata

Cadet
Joined
Apr 3, 2023
Messages
1
I've got the same problem with TrueNAS Scale 22.12.1. I recreated the zvol and changed the default zvol block size from 128K to 64K, which solved the problem.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
A little bit late to the party here, but users with SATP claim rules may have issues with the CORE -> SCALE upgrade path due to the switch in iSCSI targets (similar to the 9.2 -> 9.3 shift from istgt to ctl back in the day).

Both CORE and Enterprise support ALUA, so a claim rule routing those devices to the VMW_SATP_ALUA NMP plugin is fine there, but SCALE presents an issue for the moment because it returns an error to ALUA queries (until ALUA is supported, which is hopefully soon) - the devices need to run under VMW_SATP_DEFAULT_AA or _AP, the default non-ALUA plugins. Delete and recreate the claim rules on one host, reboot it, and it should pick things up; a rough sketch of the commands is below.
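Something along these lines from the ESXi shell (the vendor/model/PSP values are examples - list your existing rule first and match whatever it actually contains):

esxcli storage nmp satp rule list | grep -i truenas
esxcli storage nmp satp rule remove -s VMW_SATP_ALUA -V "TrueNAS" -M "iSCSI Disk" -P VMW_PSP_RR -O "iops=1"
esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V "TrueNAS" -M "iSCSI Disk" -P VMW_PSP_RR -O "iops=1" -e "TrueNAS SCALE non-ALUA RR"

Then reboot the host so the devices get reclaimed under the new rule.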
 

sash

Dabbler
Joined
Jun 12, 2017
Messages
14
I had a similar problem trying to add an iSCSI datastore to the latest ESXi 8.0.2 host from the latest TrueNAS-SCALE-23.10.1.3.
The solution was to reduce the default block size on the zvol from 128KiB to 64KiB. Then I was able to use VMFS6.
On a side note, VMFS5 does not have this issue.
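If you're doing that at the command line rather than through the UI, the equivalent is roughly the following (pool and zvol names are made up; on TrueNAS it's generally better to recreate the zvol through the web UI so the middleware stays in sync, and note that volblocksize can only be set at creation time):

zfs create -s -V 500G -o volblocksize=64K tank/iscsi/vmfs01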
 
Last edited:

Jaehoon Choi

Cadet
Joined
Jul 5, 2020
Messages
2


If you want to use a 128KB or larger zvol block size, you must check "Disable Physical Block Size Reporting" when you create the extent.
Then you can create VMFS6 on ESXi 7.0.x or above... :)
 