Best networking configuration for mixed NFS VDI storage, samba file server, and Plex on a FreeNAS Mini?

dbsoundman

Dabbler
Joined
Feb 20, 2021
Messages
26
Hi all, I just recently acquired a FreeNAS Mini. This particular unit has two gigabit NICs; as far as I know there's no option to upgrade to 10Gbit, but I haven't actually explored that much yet.

Here's what I'm doing with the device:
  • NFS share for a VDI containing the database and user file data for my Nextcloud VM - I'm currently running this with sync disabled just to get the initial migration done, but going forward I would like to set sync=always (see the snippet after this list); I know this has performance implications
  • SMB share for my "archive" of files - basically user home directories for stuff my wife and I don't need to touch every day
  • Plex media server plugin with my full media library of movies, shows, and music
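
For the sync change in the first bullet, this is roughly what I plan to run from the shell once the migration finishes (the pool/dataset names are just placeholders for my actual layout):

  # check the current sync policy on the VM dataset (names are examples)
  zfs get sync tank/vm-storage
  # force every write to be committed synchronously before it's acknowledged
  zfs set sync=always tank/vm-storage
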
Currently I have the two NICs set up in an LACP LAGG, but I am upgrading the NIC in my VM host soon, which will let me dedicate one NIC to a direct connection between the VM host and the NAS. My thought is to remove the LAGG, set up the dedicated link, and increase the MTU on both sides to 9000. This leaves the second NIC on TrueNAS for all the other functions.
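
For the jumbo-frame side of that, something like this from the TrueNAS shell would let me test it before committing (igb1 and the addressing are just examples; the real change would go through the GUI so it survives a reboot):

  # one-off test: put the dedicated interface on its own subnet with a 9000-byte MTU
  ifconfig igb1 inet 10.0.99.2/24 mtu 9000 up
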

Given that I'm currently running Plex as a VM with local storage, along with Nextcloud and other VMs, all through a single gigabit NIC, I think performance will be just fine with the above strategy, but I wanted to know whether I'm making things more complicated than they need to be by doing the dedicated link with jumbo frames.

I'm happy to draw up a diagram if needed.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
I think you've got a terminology mix-up here: VDI is "Virtual Desktop Infrastructure" and usually implies large numbers of virtual machines cloned from a base image for end users to connect to remotely. You're running a handful of VMs, but it's not a full VDI setup (e.g. VMware Horizon), correct?

The Nextcloud VM should be fine to reside on there; an NFS dataset presented to ESXi will already act as if sync=always is set, because ESXi sends all its NFS traffic as sync writes, so you could set it back to sync=standard and save yourself a little bit of potential pain, but it's still going to get the brakes put on. Is sharing the Nextcloud dataset off the TrueNAS machine directly an option? I recall there used to be a plugin for that.
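
If you want to confirm what's actually happening, a quick check from the shell looks something like this (the dataset name is a placeholder, and the last step only applies if your build ships the zilstat script):

  # see the current sync policy on the VM dataset
  zfs get sync tank/vm-storage
  # let the client decide again; ESXi will still request sync writes over NFS
  zfs set sync=standard tank/vm-storage
  # watch ZIL activity for a bit to confirm sync writes are still arriving
  zilstat 5
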

You should be fine with the dedicated network connection, and if you were experiencing contention between the NFS VM traffic and the non-NFS traffic from the ESXi host, expect some performance gains as well. You do lose the redundancy, of course, but I imagine it's not a mission-critical scenario.
 

dbsoundman

Dabbler
Joined
Feb 20, 2021
Messages
26
I'm actually on XCP-ng, not ESXi, but I think similar principles apply. I believe XCP-ng stores its virtual disks in the "Virtual Disk Image" (VDI) format, but I can't find a handy source on that right now.

I don't know whether XCP-ng forces sync writes over NFS; it seemed like it didn't, because with sync disabled traffic was moving along at 40-50 MB/s, and once I set sync=always it dropped to around 20 MB/s. My Nextcloud machine still seems as responsive as it was before, so I'll take the performance hit, especially if I can get some of it back with the dedicated link.
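
Those numbers were just me eyeballing the copy, but a rough way to compare the two settings is to watch the pool while the migration traffic runs (the pool name is a placeholder):

  # per-vdev throughput every 5 seconds while the copy is in flight
  zpool iostat -v tank 5
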

Currently ALL of my VMs use a single NIC carrying 3 tagged VLANs, so it's quite the bottleneck. I'm going to be upgrading the server with a dual-port 10Gbit SFP+ NIC, so I'll move to a 10Gbit fiber connection to my switch for general network connectivity (because why not :)), and then put a 1Gbit copper SFP in the other cage for my direct storage link to the NAS. That puts basically all Samba traffic on one of the NAS's NICs and all NFS storage traffic on the other.
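
The end state I'm picturing looks roughly like this (interface names and the subnet are just examples, not my actual config):

  NAS igb0   -> switch, tagged LAN VLANs     -> SMB archive share + Plex
  NAS igb1   -> direct cable to the VM host  -> NFS VM storage, MTU 9000, own subnet (e.g. 10.0.99.0/24)
  Host 10Gb  -> switch, fiber                -> general VM/LAN traffic
  Host 1Gb   -> direct cable to the NAS      -> NFS storage traffic only
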
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Sorry for assuming VMware; disregard all the VDI stuff.

Sync writes are the "standard behavior" in NFS - what kind of speeds do you get if you set sync=standard? That said, for maximum safety you're right to enforce it with sync=always at the dataset level.

Splitting storage traffic (NFS or iSCSI) off from the regular LAN is generally considered best practice, unless you have some other means of controlling bandwidth allocation on a shared link (per-VLAN or per-virtual-NIC bandwidth limits), so if you're split that way you're all set. I don't believe jumbo frames will win you anything significant at 1Gbps, so if they cause you any grief, don't feel bad about skipping them.
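
If you do keep jumbo frames, it's worth confirming they pass end-to-end before relying on them; from the XCP-ng host something like this works (the address is a placeholder for the NAS storage IP):

  # 8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000; -M do forbids fragmentation
  ping -M do -s 8972 -c 4 10.0.99.2
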
 