ESXi 6.7u3 2nd VMFS6 issues

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
TrueNAS-12.0-U8 Core
ESXi 6.7u3

Two pools in the system:
Pool-01 has a zvol presented to the ESXi cluster via iSCSI and formatted as a VMFS6 datastore.
Pool-02 has a zvol presented to the ESXi cluster via iSCSI, but it cannot be added as a VMFS6 datastore. It appears the GPT partition does get formatted; adding the datastore just fails. It will accept VMFS5 with no issues.


Both pools, zvols, and iSCSI shares are set up identically. I can't seem to figure out what the issue is here.
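For anyone comparing two supposedly identical zvols, the properties ESXi actually sees can be dumped side by side from the TrueNAS shell. A sketch; the pool/zvol paths below are placeholders for your own:

```shell
# Compare the zvol properties that matter to the iSCSI extent.
# Pool-01/vmware-zvol and Pool-02/vmware-zvol are placeholder names;
# substitute your actual dataset paths.
zfs get -o name,property,value volblocksize,volsize,compression,sync \
    Pool-01/vmware-zvol Pool-02/vmware-zvol

# Also check what the FreeBSD CTL iSCSI target is exporting
# (block size per LUN shows up in the verbose device list).
ctladm devlist -v
```

If the two zvols really are identical here, the difference is more likely in the extent/LUN configuration than in ZFS.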

I have read the following posts:

But I don't feel either really pertains to me.

vCenter tells me to pull the vmkernel logs, which are posted below.

Not sure if this has any weight, but the iSCSI shares are being advertised over SFP+ NICs: https://www.amazon.com/gp/product/B0073YUJM0/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

HP 586444-001 NC550SFP dual-port 10GbE



Code:
2022-03-19T13:42:58.515Z cpu5:2098116)SunRPC: 1099: Destroying world 0x20162e
2022-03-19T13:43:29.515Z cpu0:2098116)SunRPC: 1099: Destroying world 0x201631
2022-03-19T13:44:00.521Z cpu5:2098116)SunRPC: 1099: Destroying world 0x2016b1
2022-03-19T13:45:01.516Z cpu5:2098116)SunRPC: 1099: Destroying world 0x20172a
2022-03-19T13:45:32.515Z cpu4:2098116)SunRPC: 1099: Destroying world 0x201737
2022-03-19T13:46:03.515Z cpu2:2098116)SunRPC: 1099: Destroying world 0x201740
2022-03-19T13:46:11.429Z cpu0:2100822 opID=8567e66e)World: 11950: VC opID l0xvb35o-15109-auto-bnt-h5:70001557-98-29-7987 maps to vmkernel opID 8567e66e
2022-03-19T13:46:11.429Z cpu0:2100822 opID=8567e66e)LVM: 10438: Initialized naa.6589cfc000000fd1fc06a3ae5363f9b1:1, devID 6235dea3-9cc073dc-9255-6cb3115e3476
2022-03-19T13:46:11.513Z cpu0:2100822 opID=8567e66e)LVM: 13563: Deleting device <naa.6589cfc000000fd1fc06a3ae5363f9b1:1>dev OpenCount: 0, postRescan: False
2022-03-19T13:46:11.518Z cpu4:2100822 opID=8567e66e)LVM: 10532: Zero volumeSize specified: using available space (1649248567296).
2022-03-19T13:46:11.537Z cpu4:2100822 opID=8567e66e)WARNING: Vol3: 3208: Datastore/6235dea3-aff6f824-0e54-6cb3115e3476: Invalid CG offset 65536
2022-03-19T13:46:11.537Z cpu4:2100822 opID=8567e66e)FSS: 2350: Failed to create FS on dev [6235dea3-8dd48bc3-1643-6cb3115e3476] fs [Datastore] type [vmfs6] fbSize 1048576 => Bad parameter
2022-03-19T13:46:14.518Z cpu4:2097268)LVM: 16795: One or more LVM devices have been discovered.


(After failing, the partition shows as formatted.)
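The "Invalid CG offset 65536" warning right before the FSS "Bad parameter" failure suggests ESXi objected to the device geometry it was handed. You can check what logical/physical block sizes the LUN is actually reporting from an SSH session on the host; the naa ID below is the one from the vmkernel log:

```shell
# List logical/physical block sizes for all attached devices
# (available on ESXi 6.5 and later).
esxcli storage core device capacity list

# Narrow the output to the problem LUN (naa ID from the log above).
esxcli storage core device capacity list | grep naa.6589cfc000000fd1fc06a3ae5363f9b1
```

Comparing this output between the working Pool-01 LUN and the failing Pool-02 LUN may show whether the two are really presenting the same block sizes.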

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
Last week I set up an iSCSI datastore with ESXi 7.0u3, VMFS6, and TrueNAS Core 12.0-U8, using Intel 10G SFP+ NICs. It works well.
(In the end I reverted to an NFS datastore, because I think iSCSI datastores are more complicated to manage and I saw no real difference in performance.)

According to the VMware documentation, ESXi 6.7 supports VMFS6, but you could try ESXi 7.0u3 to see whether the issue comes from the VMware version.
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
I resolved this by placing all disks into one pool and creating multiple vdevs.
 

SubnetMask

Contributor
Joined
Jul 27, 2017
Messages
129
The root cause was likely the "Disable Physical Block Size Reporting" - that needs to be checked for VMware LUNs.

Nope. Not likely.

I had this same issue back in December with TrueNAS 12.0-U7. I tried everything, including that, and finally found that in order to create a VMFS6 datastore I had to create the zvol with a 16K block size. The problem is that this came with a warning from TrueNAS: 'Recommended block size based on pool topology: 128K. A smaller block size can reduce sequential I/O performance and space efficiency.' I posted a thread on it, but no one felt like replying, so I abandoned the idea of 'upgrading' to TrueNAS, at least for the time being.

If, for some reason, "Disable Physical Block Size Reporting" 'needs' to be checked for TrueNAS, why does it apparently NOT need to be checked for FreeNAS 11.1U7?

So I decided to give it a go with TrueNAS 13.0-U1.1, but hit the same issue. This time, when I try to create a 20 TiB volume with 16 KiB blocks, the volume creation fails with an 'out of space' error.

Edit: I was also able to create the VMFS6 datastore with 64 KiB blocks (without "Disable Physical Block Size Reporting" checked), but the creation of that zvol also carried the same 'Recommended block size based on pool topology: 128K. A smaller block size can reduce sequential I/O performance and space efficiency' warning.
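The 'out of space' error on a large zvol with small blocks is plausibly the upfront refreservation check: small volblocksize inflates per-block overhead, so the reservation can exceed free pool space. Creating the zvol sparse from the CLI sidesteps that check, with the usual thin-provisioning caveats. A sketch with placeholder names:

```shell
# Hypothetical pool/zvol names. -s makes the zvol sparse (no
# refreservation), which avoids the upfront out-of-space check;
# you then have to watch pool free space yourself.
zfs create -s -V 20T -o volblocksize=64K tank/esxi-lun

# Confirm what was actually set.
zfs get volblocksize,volsize,refreservation tank/esxi-lun
```

This is how the GUI's "sparse" checkbox behaves as well, so the same effect should be reachable without the shell.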
 

Murphy1138

Dabbler
Joined
Aug 5, 2022
Messages
15
For me it was the zvol block size, which was set to 128K based on the pool recommendation; I swapped it to 64K and it then formatted with no errors.
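Worth noting for anyone following this fix: volblocksize can only be set when a zvol is created, so "swapping" it means making a new zvol and re-presenting it over iSCSI (migrating any data off the old one first). A sketch with placeholder names and size:

```shell
# volblocksize is immutable after creation, so changing 128K -> 64K
# means creating a replacement zvol (names/size are placeholders).
zfs create -V 2T -o volblocksize=64K tank/esxi-lun-64k

# Confirm the new block size before wiring it to an iSCSI extent.
zfs get volblocksize tank/esxi-lun-64k
```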
 

homer27081990

Patron
Joined
Aug 9, 2022
Messages
321
ESXi is notorious for viewing hardware (virtualized and physical) in an arrogant manner, e.g. assuming SATA drives are SAS because of the controller, or ignoring the HBA mode of RAID controllers without even trying to identify the controller (those are personal experiences). It would not surprise me if this were entirely an ESXi issue.
 