With the announcement that the new experimental iSCSI target validates against a Windows 2012 R2 failover cluster, I decided to try it with my test Hyper-V 2012 R2 core failover cluster. I can indeed get the cluster to pass validation; however, when I restart either of my two nodes, the rebooted node does not reconnect properly to the storage backend.
I see mention here https://bugs.freenas.org/issues/5230#note-16 that someone else was able to get this to work, but I've had little to no success, even with a bare-minimum setup on a clean/new install.
My configuration:
iSCSI:
- 1 Portal with 3 IPs (10.0.0.1, 10.0.1.1, 10.0.2.1)
- CHAP Authorized Access
- 2 Targets all using the same Portal and Authorized Access
- 2 ZVOL extents (Quorum & Storage)
- Each target mapped to its own extent (one target per extent)
- Experimental iSCSI w/ Multithreading
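For anyone trying to reproduce this: the connections listed below depend on node-side MPIO prep (MPIO feature installed, iSCSI devices claimed). I did that part interactively, so treat this PowerShell as a rough equivalent sketch; the round-robin policy line is just an example, not necessarily what you want.

```
# Install the MPIO feature (a reboot is required before MPIO is usable).
Install-WindowsFeature -Name Multipath-IO

# Have the Microsoft DSM automatically claim iSCSI-attached disks,
# and (for example) default new MPIO disks to round-robin.
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```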
Nodes:
- Node 1 connects with MPIO to both targets over 10.0.0.1 and 10.0.1.1 (see the connection sketch after this list)
- Node 2 connects with MPIO to both targets over 10.0.0.1 and 10.0.2.1
- 1 target used as Quorum, the other target used as CSV storage for VMs
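For reference, the PowerShell equivalent of those connections looks roughly like this. I set mine up through iscsicpl, so this is an approximation; the IQNs and CHAP credentials are placeholders, not my real values.

```
# Node 1: one session per portal IP (10.0.0.1 and 10.0.1.1) to each target,
# flagged for multipath and persistence.
# IQNs and CHAP values below are placeholders.
$targets = "iqn.2005-10.org.freenas.ctl:quorum",
           "iqn.2005-10.org.freenas.ctl:storage"
foreach ($portal in "10.0.0.1", "10.0.1.1") {
    New-IscsiTargetPortal -TargetPortalAddress $portal | Out-Null
    foreach ($iqn in $targets) {
        Connect-IscsiTarget -NodeAddress $iqn `
            -TargetPortalAddress $portal `
            -IsMultipathEnabled $true -IsPersistent $true `
            -AuthenticationType ONEWAYCHAP `
            -ChapUsername "chapuser" -ChapSecret "chapsecret1234" | Out-Null
    }
}
# Node 2 is identical except the second portal IP is 10.0.2.1.
```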
When I first set up the cluster, everything runs fine. But when I pause a node and reboot it, it comes back online and shows as "connected" to the iSCSI targets, yet no disks are available in Disk Management: MPIO reports no MPIO disks, and the "Devices" view in iscsicpl shows no volume path for any of the "connected" targets. I can still access the clustered storage from Node 1 (possibly redirected through Node 2?). When I then restart Node 2, the entire cluster goes down with it, and once Node 2 comes back online it exhibits the same behavior as Node 1. I then have to restart the iSCSI service and reconnect the nodes for everything to come back online properly (see the workaround sketch below).
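The manual recovery amounts to something like this on each affected node (a sketch of what I do, with the same placeholder CHAP values as above):

```
# Bounce the initiator service, reconnect anything that stayed down,
# then force a rescan so the quorum/CSV disks reappear.
Restart-Service -Name MSiSCSI

Get-IscsiTarget | Where-Object { -not $_.IsConnected } |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true `
        -AuthenticationType ONEWAYCHAP `
        -ChapUsername "chapuser" -ChapSecret "chapsecret1234" | Out-Null

Update-HostStorageCache
```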
Can anyone replicate this issue or perhaps give me recommendations on how my configuration might be incorrect? Any help would be greatly appreciated!
Thank you.