Storage setup for Hyper-V VM

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
Good Afternoon Everyone,

I've noticed that you can't connect the same iSCSI share to different machines at the same time; as far as I can tell, you can only bind a share to one machine at a time via disconnect/connect. When I tried to connect the same share on different machines, I ran into problems.

So which sharing method should I use so that multiple hosts can access the same VM data? I'm trying to load the VM data inside my vdev from different hosts.

My storage pool setup is:
-MainPool
--vdev Svr1
--vdev Svr2
--vdev Svr3

And I need the MainPool accessible from different nodes/hosts, to load the files and turn the machines on in Hyper-V. Should I not use iSCSI?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey @UserSN,

When working with ESXi, iSCSI and NFS are the first two choices. Luckily for me, I got rid of Microshit products over 12 years ago. The consequence is that I have never used Hyper-V.

Are you using 3 single-drive vDevs in your pool? If so, know that this is extremely high risk and should not be done for anything valuable.

Also, you probably need to learn the difference between a Pool, a Dataset, a zVol, and a vDev. I suggest you do some searching or go to the Resources section to learn more about the basics of ZFS.
 

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
Hello @Heracles

Thanks for your quick response.

I have a Pool comprising 6 disks.
Inside this pool I have 4 zvols, and I created an iSCSI target for each zvol, independent of the others. So when I connect via iSCSI from Windows I see my 4 zvols separately (initially I thought I could connect the same zvol to many machines over iSCSI), but ideally I would have my VM data all in one area that can be accessed from any node on my network.

I've used Datasets for SMB (Samba) shares, but I didn't think that was the proper route for storing/sharing VM files.
Any guidance you can offer is much appreciated!
 
Last edited:

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
I had created 4 zvols inside my pool, as initially I was planning on simply hosting the root directory files of my IIS sites within each IIS server VM. I ended up exporting the VM files from Hyper-V into these zvols instead, as a test.

Doing this, I realized I cannot import my Hyper-V VMs into different nodes, because I must disconnect from NodeA in order to load a given VM into NodeB.

I'd imagined that if I have 5 nodes, I could connect "DriveX" via iSCSI to all 5 nodes and import my VMs into whichever node I want, with each node having access to the VM data/VHD files inside DriveX. When I tried this, Node1 couldn't see the files in DriveX if DriveX was connected via iSCSI to Node2.
 

XPystchrisX

Cadet
Joined
Feb 21, 2017
Messages
2
For an iSCSI LUN to be available from all nodes, you need to have Cluster Shared Volumes enabled at the cluster level. Much like ZFS isn't a clustered filesystem, NTFS and ReFS aren't clustered filesystems either.
If you're not working with a Windows Failover Cluster, then you might be able to do something like make an SMB share and grant each machine's account access, but the same situation applies: the different machines may step on each other's toes when reading and writing different sections of the disk.
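For illustration, a minimal PowerShell sketch of both routes, run on a cluster node with the standard FailoverClusters and SmbShare modules. The disk name, share path, and computer accounts below are placeholders, not names from this thread:

```powershell
# Clustered route: promote an already-clustered disk (placeholder name
# "Cluster Disk 1") to a Cluster Shared Volume so every node can use
# C:\ClusterStorage\ concurrently:
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Non-clustered alternative: an SMB 3 share, granting each Hyper-V
# host's computer account (placeholder names) full control:
New-SmbShare -Name "VMStore" -Path "D:\VMStore" `
    -FullAccess "DOMAIN\HV-NODE1$", "DOMAIN\HV-NODE2$"
```

Even with the SMB route, the coordination caveat above still applies: the share makes the files reachable from every host, but only the hypervisor/cluster layer arbitrates who owns a given VM at a given time.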

To backtrack a bit: are you absolutely set on using a Hyper-V cluster? If so, do you have a Domain Controller off-cluster already? If these questions are foreign to you, then you might be better off using something like Proxmox or XCP-ng to create your cluster or assign resources, and skip Hyper-V. Hyper-V makes some assumptions about your environment that are fantastic for people who are Microsoft-centric, but if that's not the case, then you'd be better served by another solution.
 

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
I need Microsoft, as 80% of my applications run on .NET, unless there's something else I'm missing.

I've virtualized the domain controller, but I have 3 of them split across nodes for safety. Ideally I would have a physical machine as the primary controller, but I can't do that at the moment. To ensure I don't get locked out if somehow all 3 of my nodes go down, I've not joined the host nodes themselves to my Active Directory, so I can still log in via the local administrator account on the machine itself. That in itself limits how deeply I can use Microsoft's clustering mechanisms, since to be completely automated, the physical nodes must be in the domain.

What I ended up doing is creating 8 IIS VMs and 2 IIS/ARR/NLB VMs. Ideally my 2 load-balancer VMs would point traffic to my 8 IIS VMs, with 4 and 4 pointing to the same directory in FreeNAS (4 individual vdevs or pools; I'm not sure which would be better). But if 2 VMs can't talk to the same data, my only option is using replication and creating 8 vdevs, which I see as a massive waste of disk space; there must be a better way. I'm OK with having to fail over manually if necessary, but I would like to build in as much fault tolerance and security as possible, in the most automatic way possible, within my limitations.

I'm splitting my VMs across 6 nodes to be as fault tolerant as possible. If you have any suggestions, I'm all ears. Finding out about this iSCSI limitation this deep into the build is a bit rough, but I have to find some kind of workaround.
 
Last edited:

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi again,

I need Microsoft, as 80% of my applications run on .NET, unless there's something else I'm missing.

Well, it looks like you did... The hypervisor does not care at all about .NET. It is there to run operating systems and emulate hardware, not to run applications. Go with something like ESXi, virtualize the Windows systems you need, and run your .NET applications from those Windows systems. There is no need to have Hyper-V as the hypervisor for that.

Another option would be to invest in a good firewall like pfSense. You then deploy an HA server in Amazon AWS or similar, configure OpenVPN to call back home, and you are good. Should you also need redundancy for the link, just deploy 2 firewalls on 2 different sites and set up a VPN between the 3 points. Here, I do something like that for my Pi-hole. Site A and Site B run Pi-hole in Docker containers. I have a third one in AWS on a free instance. OpenVPN is configured and calls back to both sites. IPsec is also used to link Sites A and B. That way, under normal conditions, there are 3 Pi-holes available. Any single failure drops that to 2, and the last Pi-hole will still survive a second failure.
 

UserSN

Dabbler
Joined
Jul 23, 2020
Messages
41
Hi Hercales,

I misunderstood your post. I am aware there are other hypervisors out there, and I did look into ESXi, but at this point I'm pretty much set on Hyper-V, and I'm not a fan of any cloud-based solutions in general, especially for something like firewall security. I'm looking into what hardware options I have available for load balancing, as I don't think a VM-based ARR machine will cut the mustard.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Couple thoughts...

Cluster your Hyper-V hosts, but keep your domain controllers out of the cluster and on local storage (or at least non-clustered LUNs); have at least 2 or 3 of them so that if one toasts, you can seize the roles. That way, if you do lose power, you don't end up with your cluster service stalling out because AD isn't running, with your AD VMs depending on the cluster service in order to start. The rest of your VMs: cluster them and put them on shared storage so you have failover/fault tolerance. I'd also make those DCs separate from your business DCs; set them up strictly for management of the cluster, its services, and just enough admin accounts that you can manage it. Maybe... MAYBE... build a trust relationship to that management domain; but otherwise I'd just build a management box to use, so you're not doing management from a domain controller.
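As a rough sketch of that layout, using the standard FailoverClusters PowerShell module. The cluster name, node names, IP address, and VM name below are placeholders, not values from this thread:

```powershell
# Form the cluster from the Hyper-V hosts. The DC VMs are deliberately
# NOT clustered; they stay on each node's local (non-clustered) storage.
New-Cluster -Name "HV-CLUSTER" -Node "HV-NODE1","HV-NODE2","HV-NODE3" `
    -StaticAddress 192.168.1.50

# Make a regular (non-DC) VM highly available on shared storage:
Add-ClusterVirtualMachineRole -VMName "IIS-VM1"
```

Because the DC VMs stay outside the cluster, they can boot before the cluster service and AD is available when the cluster comes up.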

In my work's case, we have regional datacenters, so we cluster everything and have the domain controllers in multiple facilities; plus our datacenters are hardened and have multiple backup generators.
 