5mall5nail5
Hi all -
I have two ESXi hosts in a cluster that have been pointing at the same physical TrueNAS/FreeNAS box (it was FreeNAS, then TrueNAS Core, now TrueNAS Scale). The box has Chelsio 10 GbE NICs set up in separate VLANs for MPIO, and each ESXi host likewise has two vmkernel ports in the matching VLANs. This setup was operational for about five years across various versions of FreeNAS/TrueNAS Core. I've now upgraded to TrueNAS Scale and it worked... until I rebooted my hosts.
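For anyone who wants to check the same things: I verify the paths from the host shell with something like the following (the naa device ID is the one that shows up in the error later in this post):

# show all devices with their PSP and working-path state
esxcli storage nmp device list
# per-path detail for the TrueNAS device
esxcli storage core path list -d naa.6589cfc0000007f4d46f249100690cf3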
First, I noticed my rebooted ESXi host came up with both paths to the iSCSI devices marked dead. I had set up claim rules targeting TrueNAS iSCSI disks to use round robin with the IOPS policy and iops=1, per VMware best practice (this worked great on FreeNAS/TrueNAS Core). The claim rule persisted through the TrueNAS Scale upgrade, but since the hosts had not rebooted, I guess the sessions just reconnected and things kept working. Once I rebooted: paths down. I removed the claim rules, rebooted the hosts, and the paths came back! I had Storage vMotion'd my VMs off via a host that hadn't rebooted yet and got them onto NFS storage outside of TrueNAS Scale. I then deleted my zvol, pool, iSCSI extents, etc. and rebuilt the storage pool, and while I now see two paths to the device, when I go to create a VMFS6 datastore in vCenter I get:

An error occurred during host configuration. Operation failed, diagnostics report: Unable to create Filesystem, please see VMkernel log for more details: Failed to create VMFS on device naa.6589cfc0000007f4d46f249100690cf3:1

I know TrueNAS Scale is still young, but I'm kind of surprised because my storage had been working after the reboot of TrueNAS Core into TrueNAS Scale, yet here I am. I have iSCSI working to a Synology from the same ESXi hosts, nothing has changed in the vmnic/vmkernel/iSCSI configuration on the ESXi side, and I have not changed ESXi versions in any way between it working and not.
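For reference, the claim rule I'd had in place was the usual TrueNAS round-robin SATP rule, roughly like this (I'm reconstructing it from memory, so treat the vendor/model strings as approximate):

# claim TrueNAS iSCSI disks with round robin, switching paths every 1 I/O
esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --psp=VMW_PSP_RR --psp-option="iops=1" --vendor="TrueNAS" --model="iSCSI Disk" --description="TrueNAS RR iops=1"
# verify it, and remove it again (removing it is what got my paths back)
esxcli storage nmp satp rule list | grep -i truenas
esxcli storage nmp satp rule remove --satp=VMW_SATP_ALUA --psp=VMW_PSP_RR --psp-option="iops=1" --vendor="TrueNAS" --model="iSCSI Disk"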
In vmkernel.log on an affected host I see:
...2022-10-07T13:07:35.694Z cpu34:2097833)NMP: nmp_ResetDeviceLogThrottling:3776: Error status H:0x0 D:0x2 P:0x0 Sense Data: 0x5 0x24 0x0 from dev "naa.6589cfc0000007f4d46f249100690cf3" occurred 1 times(of 0 commands)
2022-10-07T13:11:35.702Z cpu36:2097503)StorageDevice: 7059: End path evaluation for device naa.6589cfc0000007f4d46f249100690cf3
2022-10-07T13:35:44.926Z cpu40:2100048)LVM: 7076: Failed to probe the VMFS header of naa.6589cfc0000007f4d46f249100690cf3:1 due to No filesystem on the device
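For what it's worth, that sense data decodes to sense key 0x5 (ILLEGAL REQUEST) with ASC 0x24 (INVALID FIELD IN CDB), i.e. the target is rejecting a command the host issues while building the datastore. A quick sanity check on the partition table from the host shell (partedUtil ships with ESXi; the device ID is the one from the log above):

partedUtil getptbl /vmfs/devices/disks/naa.6589cfc0000007f4d46f249100690cf3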
I've played with presenting it with a 512-byte logical block size, disabling physical block size reporting, and so on, and my extents are enabled. I can see the device, I just cannot do anything with it. So odd.
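If anyone wants to compare, the logical/physical block sizes the host actually sees should be visible with something like this on reasonably recent ESXi builds (filtering on my device ID):

esxcli storage core device capacity list | grep -i 6589cfc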
Update: I forgot TrueNAS saves boot environments. I simply reverted to my TrueNAS Core boot environment; of course my pools were offline there, so I re-created those plus the iSCSI extents, etc., and it works as expected. So something is up in TrueNAS Scale with iSCSI and vSphere.
Thanks for reading all!