iSCSI or NFS for VMs on another host?


everyman

Dabbler
Joined
Jun 24, 2016
Messages
14
Our FreeNAS will be used for two main purposes:
  • File sharing for workstations
  • Boot/system storage for VMs (mix of Linux and Windows)
What is the received wisdom for configuring the VMs and host:
  • iSCSI to host, host allocates to VMs
  • NFS to host, host allocates to VMs
  • iSCSI to VMs
  • NFS to VMs
  • Something else
Each has pros and cons. My inclination is to create one large iSCSI extent for the host and let it manage individual “volumes” for the guests.
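For the one-big-extent approach, something like the following is roughly what I have in mind. Just a sketch; the pool name "tank", the zvol name, and the size are placeholders.

Code:
# Sparse (thin-provisioned) zvol to back the iSCSI extent; a smaller
# volblocksize than the default tends to suit VM workloads.
zfs create -s -o volblocksize=16K -V 4T tank/vm-extent
# Then attach it as a device extent in the GUI under
# Sharing -> Block (iSCSI) -> Extents, and let the host carve it up.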

On the downside, this complicates backups and snapshots, which is where NFS might make more sense. Not a huge consideration if we rsync the guests and workstations anyway.
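For completeness, here's roughly what zvol-level backups would look like. Snapshot names and the "backupbox" target are made up, and the result is only crash-consistent unless the guests are quiesced first.

Code:
# Snapshot the zvol backing the extent...
zfs snapshot tank/vm-extent@nightly-0201
# ...and replicate it incrementally to another machine over SSH.
zfs send -i tank/vm-extent@nightly-0131 tank/vm-extent@nightly-0201 | \
    ssh backupbox zfs receive backup/vm-extent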

A third possibility is to add more disks and create a separate zvol to share with the host over iSCSI. Possibly the optimal solution, but it requires budget I don't have at the moment.

What have I missed/got right?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I do exactly this at home. I have two pools: a 12-disk striped-mirror array with L2ARC and SLOG, which presents storage to my VMs via NFS (one NFS mount shared by all 3 nodes) and via iSCSI (used for connections directly to the VM guests, where the content is shared in an active/passive failover cluster), and a 6-disk RAID-Z2 array for file storage. I would recommend something similar for you; choose NFS vs. iSCSI based on your experience with the two protocols.

Whatever you use for block storage (your VMs), you need to make sure the pool is designed properly. Striped mirrors, not RAID-Z... a SLOG... plenty of RAM in the box... etc.
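As a rough sketch of that kind of layout (device names and the pool name "tank" are placeholders, and on FreeNAS you'd normally build this through the GUI rather than at the shell):

Code:
# Four 2-way mirrors striped together for block storage...
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
# ...with a SLOG and an L2ARC on separate SSDs.
zpool add tank log nvd0
zpool add tank cache nvd1
# Sanity-check the resulting layout.
zpool status tank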

Please post detailed specs of your system.
 

everyman

Dabbler
Joined
Jun 24, 2016
Messages
14
System config:
  • Intel C612 motherboard (FreeNAS certified)
  • 6-core 1.7GHz Xeon processor
  • 32GB RAM (4x8GB DDR4)
  • 240 GB L2ARC SSD
  • Striped mirrors (8x6TB drives + 1 spare, 24TB total capacity)
  • Quad-port 1GbE NIC bonded as lagg0
What's missing, I realize, is a ZIL/SLOG SSD. When we configured and ordered the system, we believed it would be used primarily for CIFS/SMB shares to our workstations. Of course, things change in a month or two; now we need the NFS shares and iSCSI extents we didn't foresee.
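From what I understand, a SLOG can be added to the existing pool after the fact. A sketch below, with "tank" and the new SSD "nvd1" as placeholders; on FreeNAS this would normally be done through the GUI so it persists.

Code:
# Attach the new SSD as a dedicated log device.
zpool add tank log nvd1
# Optionally force sync writes on the zvol backing the iSCSI extent,
# so it gets the same safety that NFS sync writes would.
zfs set sync=always tank/vm-extent
# Confirm the log vdev shows up.
zpool status tank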

This question may be moot anyway: using iSCSI results in frequent (every 10 seconds) "ctld: read: connection lost" messages on the FreeNAS console, with similar messages on the VM host console. It seems to be harmless, but I'd rather not spam the consoles.
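If it helps narrow things down, I can watch the iSCSI traffic to see which initiator keeps dropping; the interface name below is a placeholder.

Code:
# Watch iSCSI (TCP 3260) traffic on the interface carrying it.
tcpdump -ni igb0 port 3260
# List the portals/targets as ctld currently sees them.
ctladm portlist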

I'm planning to install a second SSD for the intent log (SLOG), but I doubt that will stop the "connection lost" messages. A pity, because my perception is that VM performance is better over iSCSI than NFS.

If anybody has suggestions to help me find my way, I'm all ears.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
What version of FN are you running? I seem to recall a bug causing that message in the 9.3 time frame.

If it's not the bug, it's likely a layer 1 issue. iSCSI doesn't always play nicely with laggs... try dropping to a single NIC to see if the problem persists.
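Something along these lines; interface names and addresses are placeholders, and on FreeNAS you'd make the change in the network GUI so it survives a reboot:

Code:
# Pull one member out of the lagg...
ifconfig lagg0 -laggport igb3
# ...give it its own address, ideally on a separate subnet/VLAN...
ifconfig igb3 inet 10.0.10.5/24 up
# ...then point the iSCSI portal at that address instead of the lagg IP
# (Sharing -> Block (iSCSI) -> Portals).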
 

everyman

Dabbler
Joined
Jun 24, 2016
Messages
14
We're running FreeNAS 11.1-U1. (It's a new system purchased from iXsystems in January.)

Perhaps the solution is to drop one NIC out of lagg0 and use it for iSCSI. Worth a try.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The simplest answer is that something at layer 1 or 2 is causing the disconnects. Getting rid of the lagg, making sure you're using good cables and a good switch (cheap switches often don't tolerate the load iSCSI can generate), etc., is the first step.
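A couple of quick things to look at while traffic is flowing (interface names are placeholders):

Code:
# Per-interface error counters (Ierrs/Oerrs/Colls); if these climb under
# iSCSI load, suspect cabling, a NIC, or the switch rather than ctld.
netstat -i
# Negotiated speed/duplex and link state on the lagg and its members.
ifconfig lagg0
ifconfig igb0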
 

everyman

Dabbler
Joined
Jun 24, 2016
Messages
14
Reconfigured the lagg to use 3 ports, with iSCSI listening on the 4th. No joy -- still seeing the 10-second disconnects. Everything in there is new, but it's probably worth swapping a cable, maybe even moving one port on each box to a separate switch to isolate the iSCSI traffic.
 