Unable to mount zvols to multiple hosts

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
I have two ESXi 6.7 hosts in an HA cluster. No DR and no vDS licensing.
Each host has a dual-port SFP+ NIC, a single-port SFP+ card, and the onboard 1 Gb Ethernet.
Each host has a single DAC cable running from its dual-port SFP+ card directly into an SFP+ card in the TrueNAS device. After creating the iSCSI targets on the TrueNAS, I am able to see the iSCSI extents / zvols on each ESXi host. However, if I mount an extent and create a VMFS datastore on ESXi host A, I cannot add the same VMFS volume on host B.

I do not own an SFP+ switch, so I would like to have a 10G direct connection from the hosts to the storage container as if I were emulating external HBA cards. I'm more familiar with external HBAs and NFS file sharing, so this concept of using NICs in place of HBAs is a good brain teaser. I used this article to help set up the configuration, but it appears the author's tutorial only uses one host. In the comments, one person talks about how this will work for devices in clusters using vSphere (which I have), but I cannot find any support documentation that explains how to accomplish this.

Any help with this would be greatly appreciated.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Please note, for clarity's sake, that these become iSCSI extents at the point they hit the iSCSI protocol; there IS such a thing as ZFS/ZVOL on top of an iSCSI extent, so best not to confuse the issue here. Additionally, your 10G network is ETHERNET, nothing to do with HBA, except that you happen to be using Internet Protocol to transport storage data. But we don't call an ethernet adapter an HBA just because NFS traffic runs over it, and we shouldn't do that for iSCSI either. This is important because there are actually ethernet controllers that do present iSCSI storage controllers in a manner similar to an HBA, but that's relatively arcane and unusual.

Debug it the normal way you'd do for ESXi storage. Make certain that the underlying block device is showing up on all your hosts.
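For anyone who wants to script that check, here is a minimal sketch using pyVmomi (the vSphere Python SDK) that lists the SCSI devices each host can see. The vCenter address and credentials are placeholders; the point is that the same TrueNAS extent should appear with the same canonical (naa.) name on both hosts.

```python
# Minimal sketch, assuming pyVmomi is installed and a reachable vCenter.
# The hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
content = si.RetrieveContent()

# Walk every ESXi host and print the block devices it can see.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    ss = host.configManager.storageSystem
    for lun in ss.storageDeviceInfo.scsiLun:
        # The iSCSI extent should show the same canonical (naa.) name on both
        # hosts; if one host is missing it, fix iSCSI/networking first.
        print("  ", lun.canonicalName, "-", lun.displayName)

Disconnect(si)
```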

If you are seeing the block device on both ESXi hosts in a direct-attach scenario, your vCenter is unlikely to be able to figure out on its own precisely what is going on, so you may have to do the configuration at the ESXi or per-host configuration level.
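If the configuration does end up being per host, one way to script it is to talk to each ESXi host directly (a host exposes the same API as vCenter). The sketch below adds the TrueNAS portal as a static iSCSI target on one host; the host address, credentials, adapter name (vmhba64), portal IP, and IQN are all placeholders for whatever your software iSCSI adapter and TrueNAS target actually are.

```python
# Sketch only: add the TrueNAS portal as a static iSCSI target on one host.
# Host address, credentials, vmhba name, portal IP, and IQN are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-b.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]                     # connected straight to a single ESXi host
ss = host.configManager.storageSystem

target = vim.host.InternetScsiHba.StaticTarget(
    address="10.10.10.2",               # TrueNAS portal on the direct 10G link
    port=3260,
    iScsiName="iqn.2005-10.org.freenas.ctl:esxi")   # example extent IQN
ss.AddInternetScsiStaticTargets(iScsiHbaDevice="vmhba64", targets=[target])
ss.RescanAllHba()                       # pick up the newly reachable LUN

Disconnect(si)
```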

Create the VMFS filesystem on one of the ESXi hosts. This will lock out the other ESXi host, which will correctly refuse to create a VMFS filesystem on top of an existing one. Instead, what you want to do is have the other ESXi host rescan for datastores. And pop, magically, it will see the VMFS filesystem and make it available.
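The rescan step can also be scripted along the same pyVmomi lines, if that's easier than clicking through the host client; again, the host name and credentials are placeholders.

```python
# Sketch: rescan host B and list its datastores; the VMFS volume created on
# host A should appear after the rescan. Credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-b.lab.local", user="root",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
ss = host.configManager.storageSystem

ss.RescanAllHba()      # rescan the iSCSI adapters for new devices
ss.RescanVmfs()        # then rescan those devices for VMFS volumes

for ds in host.datastore:
    print(ds.name)     # the datastore created on host A should be listed now

Disconnect(si)
```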
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
Please note, for clarity's sake, that these become iSCSI extents at the point they hit the iSCSI protocol; there IS such a thing as ZFS/ZVOL on top of an iSCSI extent, so best not to confuse the issue here.
Good to know. I really wasn't sure how to describe the issue with the proper terminology.
Additionally, your 10G network is ETHERNET, nothing to do with HBA, except that you happen to be using Internet Protocol to transport storage data. But we don't call an ethernet adapter an HBA just because NFS traffic runs over it, and we shouldn't do that for iSCSI either.
I understand that. This is why I wrote: "a 10G direct connection from the hosts to the storage container as if I were emulating external HBA cards." I know this is not an HBA by any means. I believe we both meant the same thing, just different lingo. But thanks for the clarification.
Create the VMFS filesystem on one of the ESXi hosts. This will lock out the other ESXi host, which will correctly refuse to create a VMFS filesystem on top of an existing one. Instead, what you want to do is have the other ESXi host rescan for datastores. And pop, magically, it will see the VMFS filesystem and make it available.
This is what was not happening, and I spent an easy 8 hours trying to figure out why. I even posted in VMware's forum because I couldn't determine whether it was an ESXi issue or a TrueNAS issue.
I was able to resolve the problem by factory resetting the network settings on host B. Apparently I gummed something up when switching the 10G NICs from NFS to iSCSI block storage.

@jgreco If you get a chance, could you put some eyes on the following post or tag someone who would know the best solution? This would help me greatly.

This can be marked as solved / resolved.
 