Best way to use FreeNAS pool as Hyper-V VM storage

ttoast4276

Cadet
Joined
Apr 19, 2020
Messages
2
What's the best method to store/share Hyper-V VMs on FreeNAS?
  • I'm seeing iSCSI and SMB as potential options.
  • iSCSI seems less than ideal, since it puts a whole additional filesystem on top of ZFS.
  • SMB - I'm having permission issues with Hyper-V (which seems to be a common Hyper-V thing).
    • Does having a Windows domain solve this issue?
  • Is SMB or iSCSI better from a power-loss/data-loss perspective?

Background:
  • Hardware - (i5, 8GB RAM, 3x Seagate IronWolf Pro 12TB, 2x 128GB Samsung 850 Pro SSD, Intel quad-port NIC)
    • Hoping to aggregate the links, either through SMB multichannel or LACP
    • Currently just a proof of concept. Will upgrade hardware once this works.
  • Lab environment (e.g. basic hardware, gigabit network, not production, etc.)
  • Standalone FreeNAS setup (no AD integration, etc.)
  • Two standalone Hyper-V hosts (not domain joined)
  • No domain network
  • 10 VMs, mostly Windows Server 2019

How would you recommend accessing FreeNAS for Hyper-V VMs?

Thanks in advance for your help!
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey TToast,

Here, I use iSCSI and would not rely on SMB for something like that. It's such a crappy protocol, designed for the 1990s... Read about sync writes in the forum and you will see the difference between NFS (same case for SMB) and iSCSI.

Should your hypervisor need to write something into a big file (like ESXi writing into a VMDK file), the sync write will be painful. It is required to be safe, but it is so slow. With iSCSI, there is no problem like that, because iSCSI never writes big files; it only writes small sectors.
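
To get a feel for the cost, here is a quick Python sketch you can run locally. It just compares buffered writes against forcing every write to stable storage with fsync() - a toy stand-in for sync writes over NFS/SMB, not a real benchmark; the file names and counts are arbitrary.

import os
import time

BLOCK = b"x" * 4096   # one 4 KiB block per write
COUNT = 2000          # number of writes

def timed_writes(path, sync):
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
            if sync:
                f.flush()
                os.fsync(f.fileno())   # force the block to stable storage
    return time.monotonic() - start

print(f"buffered writes:   {timed_writes('buffered.bin', sync=False):.2f}s")
print(f"fsync every write: {timed_writes('synced.bin', sync=True):.2f}s")

On a spinning disk the fsync version is typically orders of magnitude slower; that is the same penalty a sync-write datastore pays on every block.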

A datastore without sync writes is a No-Go for me.
A datastore slowed down by NFS (or SMB) doing sync writes is also a No-Go for me.
That is why I have 4x 1Gb iSCSI links configured between my ESXi and my FreeNAS. And thanks to FreeNAS's performance, that storage is much faster than ESXi's local storage.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
This. A LAG (LACP) will try to load balance by hash, which means that one client talking to one server only ever uses one link. There are ways around that - L4 hash support in the switch and multiple connections - but that means more expensive switching gear.
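
To illustrate, here is a toy Python sketch of hash-based link selection (a simplified stand-in; real switches hash vendor-specific header fields):

import zlib

NUM_LINKS = 4

def lag_egress_link(src_mac, dst_mac, src_ip, dst_ip):
    # Simplified layer 2+3 hash policy: same inputs, same link.
    key = f"{src_mac}|{dst_mac}|{src_ip}|{dst_ip}".encode()
    return zlib.crc32(key) % NUM_LINKS

# One Hyper-V host talking to one FreeNAS box: the header fields never
# change, so every frame of that flow lands on the same physical link.
print(lag_egress_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                      "10.0.0.10", "10.0.0.20"))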

By contrast, MPIO on iSCSI is simple and performs well. Here is a setup guide: https://johnkeen.tech/freenas-11-2-and-esxi-6-7-iscsi-tutorial/

Keep in mind that ZFS performs best when kept under 50% full in the case of block storage, such as VMware access.
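
As a rough back-of-the-envelope for the OP's disks (assuming 3x 12TB in RAIDZ1, which is an assumption on my part - mirrors or another layout change the numbers, and ZFS overhead is ignored):

# ASSUMPTION: 3x 12 TB disks in RAIDZ1, ignoring ZFS overhead.
disks, size_tb, parity = 3, 12, 1
usable_tb = (disks - parity) * size_tb    # ~24 TB of usable space
block_budget_tb = usable_tb * 0.50        # keep block storage under ~50%
print(f"~{usable_tb} TB usable; keep zvols under ~{block_budget_tb:.0f} TB")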
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
A LAG (LACP) will try to load balance by hash,

Sorry @Yorick, but you got it wrong here...

These links are not grouped by LACP. They are fully independent at layer 2 and layer 3. The load is balanced by the iSCSI protocol itself. Here, I chose to load balance on a per-packet basis, switching after every single packet. That means iSCSI sends a packet on one link, then switches to another one for the second packet, a third one for the third packet, and the last one for the fourth packet, before coming back to the first link for the 5th packet.

Thanks to that load balancing, the 4x 1Gb links balanced by iSCSI are much better than LACP.
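
In Python terms, the scheduling is nothing more than a round-robin cycle over independent links (a toy model of the behaviour described above, not actual initiator code):

from itertools import cycle

# Four independent links, as in the 4x 1Gb setup above.
links = cycle(["link1", "link2", "link3", "link4"])

for packet in range(1, 9):
    print(f"packet {packet} -> {next(links)}")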
 

ttoast4276

Cadet
Joined
Apr 19, 2020
Messages
2
Thanks to both of you for your input! Great to know that iSCSI is the way to go.

So MPIO with iSCSI is the way to go, rather than LACP?
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
The load is balanced by the iSCSI protocol itself

That was my point. The OP had talked about doing a LAG, and I am suggesting that independent links with MPIO are a better choice. We're saying the same thing, just not realizing it :).
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
That was my point. The OP had talked about doing a LAG

Ok. The first time I saw your reply, I thought you were replying to me and not to the original post.

Indeed, we recommend the same path here.

Have a nice day,
 