VMware: iSCSI or NFSv4

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
Note: this is a lab environment.

Currently running 3 ESXi hosts connected to TrueNAS via NFSv3, with 10GbE on each host and a 2x 10GbE LAG on the TrueNAS side. I do not have performance issues.

Recently a vendor came in and deployed a new hyper-converged system that runs on NFSv3 with an 8K block size.

Reading more about where VMware is going, it looks like iSCSI or NFSv4 are the ways to go.

How is TrueNAS hardware deployed (which protocol)?
What is everyone else using?
Should I expect to see performance improvements by switching?
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
As far as I know, most sysadmins still use NFSv3 datastores because they are much easier to manage than the alternatives.
 

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
Does anyone know how iXsystems appliances are deployed?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Don't change anything just for the sake of changing it. There's no indication that VMware is deprecating NFSv3, so unless you need NFS multipathing support (and you said your performance is fine now), stick with it. You actually lose out on a couple of features, such as Storage I/O Control (SIOC), if you go with NFSv4.

Does anyone know how iXsystems appliances are deployed?

Commercial TrueNAS systems are deployed "as per customer request," whether that be NFSv3, NFSv4, iSCSI, or FC. Generally you adapt the hardware to your current environment rather than changing the entire SAN to accommodate it, unless you're doing a greenfield/brownfield exercise.

Your current environment suggests that NFSv3 is perfectly fine though, other than the obvious lack of redundancy to your hosts. If you have access to Network I/O Control (NIOC), you might consider setting that up as well, even if just as a simple "50% minimum guaranteed shares for the NFS vmkernel port" to ensure there's always room for storage traffic.
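If you want to sanity-check what NIOC is currently doing before touching it, here's a rough read-only pyVmomi sketch. The vCenter hostname and credentials are placeholders, and it assumes NIOC v3 on a vSphere Distributed Switch; it just prints the per-traffic-class share allocations, including the one for NFS:

```python
# Read-only sketch: list NIOC traffic-class share allocations on each vDS.
# Assumes pyVmomi is installed and NIOC v3 is enabled on a vSphere Distributed Switch.
# "vcenter.lab.local" and the credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        print(f"vDS: {dvs.name}")
        # Each entry is a traffic class (management, vmotion, nfs, iSCSI, ...)
        for res in dvs.config.infrastructureTrafficResourceConfig or []:
            alloc = res.allocationInfo
            print(f"  {res.key:18} shares={alloc.shares.shares:4} "
                  f"level={alloc.shares.level} reservation={alloc.reservation}")
    view.Destroy()
finally:
    Disconnect(si)
```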
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I've been back and forth on this for a while and have always been an NFS fan... but we're taking a serious look at iSCSI on FreeNAS 11.3 (not the new TrueNAS) so that we can run thick disks instead of thin. Obviously we use this for virtual machines. I'm running 64K blocks and may go down to 32K on the pool, with default zvol settings otherwise. I had high hopes for the metadata vdev; I had two 800GB P3700s and a 2TB P3700 for L2ARC, and I just put them in a stripe for FreeNAS 11.
 

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
I've been back and forth on this for a while and have always been an NFS fan... but we're taking a serious look at iSCSI on FreeNAS 11.3 (not the new TrueNAS) so that we can run thick disks instead of thin. Obviously we use this for virtual machines. I'm running 64K blocks and may go down to 32K on the pool, with default zvol settings otherwise. I had high hopes for the metadata vdev; I had two 800GB P3700s and a 2TB P3700 for L2ARC, and I just put them in a stripe for FreeNAS 11.

Why would you change the block size? It's an open question why the vendor chose 8K.

Are you saying that with NFS you cannot thin provision?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Why would you change the block size? It's an open question why the vendor chose 8K.

volblocksize on ZFS is a maximum. Your HCI/storage vendor may handle things differently. But in very general terms, smaller block/record sizes on a LUN or dataset can lead to better random I/O latencies, at the cost of worse compression and lower transfer rates due to increased overhead. Adjusting the block size to the workload can be beneficial. 32K is a good "general-purpose compromise" for VMDKs.
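As a quick illustration (the dataset names "tank/vmware-nfs" and "tank/vmware-iscsi" below are placeholders), you can pull the current block-size and compression figures straight from ZFS on the TrueNAS/FreeNAS box with something like this:

```python
# Sketch: report block-size and compression properties for a few datasets/zvols.
# Dataset names are hypothetical; run this on the TrueNAS/FreeNAS host itself.
import subprocess

def zfs_get(dataset: str, props: str) -> dict:
    """Return the requested ZFS properties for one dataset as a dict."""
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "property,value", props, dataset],
        check=True, capture_output=True, text=True).stdout
    return dict(line.split("\t") for line in out.strip().splitlines())

for ds in ["tank/vmware-nfs", "tank/vmware-iscsi"]:
    try:
        # recordsize applies to datasets, volblocksize to zvols; the other shows "-"
        info = zfs_get(ds, "recordsize,volblocksize,compressratio,used")
    except subprocess.CalledProcessError:
        print(f"{ds}: not found (placeholder name)")
        continue
    print(ds, info)
```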

Are you saying that with NFS you cannot thin provision?

NFS defaults to thin provisioning and requires NAS VAAI support for fully-allocated disks, which I don't believe TrueNAS provides. iSCSI on zvols supports the full block VAAI suite, including UNMAP if done on sparse zvols.
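For example, here's a minimal sketch of creating a sparse (thin) 32K-volblocksize zvol to back an iSCSI extent, which is what lets UNMAP hand space back to the pool. The pool/zvol name and size are made up, and on TrueNAS you'd normally do this through the UI rather than the shell:

```python
# Sketch: create a sparse 32K-volblocksize zvol suitable for an iSCSI extent.
# "tank/vmware-iscsi" and the 500G size are placeholders.
import subprocess

zvol = "tank/vmware-iscsi"

# -s = sparse (thin) reservation, -V = volume size, -o sets volblocksize at creation.
subprocess.run(
    ["zfs", "create", "-s", "-V", "500G", "-o", "volblocksize=32K", zvol],
    check=True)

# A sparse zvol has no refreservation, so space freed by UNMAP returns to the pool.
subprocess.run(["zfs", "get", "refreservation,volblocksize", zvol], check=True)
```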
 

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
volblocksize on ZFS is a maximum. Your HCI/storage vendor may handle things differently. But in very general terms, smaller block/record sizes on a LUN or dataset can lead to better random I/O latencies, at the cost of worse compression and lower transfer rates due to increased overhead. Adjusting the block size to the workload can be beneficial. 32K is a good "general-purpose compromise" for VMDKs.



NFS defaults to thin provisioning and requires NAS VAAI support for fully-allocated disks, which I don't believe TrueNAS provides. iSCSI on zvols supports the full block VAAI suite, including UNMAP if done on sparse zvols.
Can I make these changes at any point or does the pool need to be redone?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Can I make these changes at any point or does the pool need to be redone?
recordsize can be changed at any time, but will only affect newly written (or updated) data.

For zvols, unfortunately, volblocksize is immutable. You can, however, create new ones with the desired value and migrate your data at the hypervisor level (e.g. with Storage vMotion).
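Roughly, it looks like this (all dataset/zvol names are hypothetical): recordsize is a one-line change that only affects new writes, while for a zvol you create a replacement with the volblocksize you want and Storage vMotion the VMs across:

```python
# Sketch: recordsize can be set in place; a new volblocksize means a new zvol.
# All dataset/zvol names here are placeholders.
import subprocess

def zfs(*args):
    subprocess.run(["zfs", *args], check=True)

# NFS datastore dataset: takes effect immediately, but only for newly written data.
zfs("set", "recordsize=32K", "tank/vmware-nfs")

# iSCSI zvol: volblocksize is fixed at creation, so make a new sparse zvol...
zfs("create", "-s", "-V", "500G", "-o", "volblocksize=32K", "tank/vmware-iscsi-32k")
# ...present it as a new iSCSI extent/datastore, then Storage vMotion the VMs over
# and destroy the old zvol once it's empty.
```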

Which vendor are you working with, if you can share? Their concept of "record size" might have different implications compared to ZFS.
 

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
recordsize can be changed at any time, but will only affect newly written (or updated) data.

For zvols, unfortunately, volblocksize is immutable. You can, however, create new ones with the desired value and migrate your data at the hypervisor level (e.g. with Storage vMotion).

Which vendor are you working with, if you can share? Their concept of "record size" might have different implications compared to ZFS.
Cisco HyperFlex. Not a traditional SAN or a FreeNAS install.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Cisco HyperFlex. Not a traditional SAN or a FreeNAS install.

From what I can glean from the documentation, Cisco HX "recordsize" is fixed. They recommend 4K for VDI deployments and 8K for other general-purpose use. They also do compression and deduplication, claimed to be "inline," but I imagine it actually happens during destage from cache.
 