Speed Differences between NFS, iSCSI, and SMB

orddie

Contributor
Joined
Jun 11, 2016
Messages
104
Hello Everyone.

I'm noticing a performance difference between servers using different access methods when accessing the same TrueNAS CORE server running 13.0-U1.1.

The server is an AMD 3700X, 64 GB of memory, 10 GbE networking, and 20 disks configured as 10 vdevs. The disks are SSDs. The pool does not have cache, log, metadata, dedup, or hot spare devices configured.
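If it helps, the layout can be double-checked from the TrueNAS shell with something like the following (the pool name "tank" is just a placeholder here, substitute the real one):

zpool status tank
zfs get sync,dedup tank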

The connectivity to the server is not routed.
Each ESXi server is an AMD 5600G with 129 GB of memory and a 10 GbE connection.
The Windows admin box is a 5900X with 32 GB of memory and a 10 GbE connection.
The switch is a UniFi US-16-XG.
Jumbo frames (MTU) have been configured across all systems and on the UniFi US-16-XG.
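To confirm jumbo frames actually pass end to end, a do-not-fragment ping sized for a 9000-byte MTU can be run from each side (assuming MTU 9000; <truenas-ip> below is a placeholder):

From an ESXi host:
vmkping -d -s 8972 <truenas-ip>

From the Windows box:
ping -f -l 8972 <truenas-ip>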

When using the Windows host, we see SMB writes over 800 MB/s and reads up to 1.14 GB/s.
[Attachments: workstation-read.png, workstation-write.png]


For NFS and iSCSI testing, I spun up a single VM and placed a disk on each type of datastore. Using CrystalDiskMark, we see the performance below for reads and writes. I notice that iSCSI has slow reads but fast writes, while NFS has fast reads but slow writes.
[Attachments: iSCSI.png, NFS.png]


Delayed ACK has been disabled on the ESXi hosts.

I changed the ESXi maximum iSCSI I/O size from 128 KB to 512 KB today via: esxcli system settings advanced set -o /ISCSI/MaxIoSizeKB -i 512
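To double-check that the new value is active (as I understand it, this particular setting needs a host reboot to take effect), the current value can be listed with:

esxcli system settings advanced list -o /ISCSI/MaxIoSizeKB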

Any idea what I can do to even out the speeds?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Each access protocol has very different characteristics and performs differently.

There are also huge differences depending on how the tests are done: file sizes, queue depth, caching, etc.

iSCSI is better for IOPS
SMB is better for streaming bandwidth
NFS is in between

VMFS has its own characteristics

So, I think similarity is the wrong goal... better to focus on your workload and the performance you need.
One protocol may meet your needs, while others do not.
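One way to compare them on equal footing is to run the same synthetic test against each datastore with fixed parameters. A rough sketch with fio inside the test VM (assuming a Linux guest with libaio available, and /dev/sdb as a placeholder for a scratch disk on the datastore under test; the write test is destructive to that disk, and block size and queue depth should be adjusted to match the workload you actually care about):

# small-block random read, queue depth 32
fio --name=randread --filename=/dev/sdb --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based

# large-block sequential write, queue depth 8
fio --name=seqwrite --filename=/dev/sdb --ioengine=libaio --direct=1 --rw=write --bs=1m --iodepth=8 --runtime=60 --time_based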
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Please see the important resource below regarding sync write behavior and the defaults for NFS vs iSCSI:


Did you force sync=always on the iSCSI zvols? That's an important place to start when comparing the two protocols, to give them a level playing field.
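A minimal sketch of checking and forcing it from the TrueNAS shell, assuming the zvol lives at tank/iscsi/vm-disk (placeholder path):

zfs get sync tank/iscsi/vm-disk
zfs set sync=always tank/iscsi/vm-disk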

2x Intel 82599ES 10-Gbit Network cards in a 20Gbit LAG

Generally speaking, LAG and iSCSI MPIO don't mix. Even if you're going to run NFS, I try to avoid mixing NAS and SAN (or NFS being used in a SAN-like manner) protocols on the same interfaces. Is adding another dual-port 10 GbE card possible here?
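If you do go the MPIO route, the usual approach on the ESXi side is to bind one VMkernel port per physical NIC to the software iSCSI adapter instead of hanging everything off a LAG. A rough sketch, where vmk1/vmk2 and vmhba64 are placeholders for your actual VMkernel ports and software iSCSI adapter:

esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2
esxcli iscsi networkportal list -A vmhba64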
 