Poking the bear.... iSCSI vs NFS?

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
I believe NFS is treated with sync writes enabled by default, while iSCSI leaves it up to the system that's connecting to it (i.e., ZFS's sync=standard behavior). Also, as I understand NFS, all traffic is pushed through a single host that manages the NFS connection, whereas iSCSI allows multiple sessions to interact with the LUN; it's then up to each server to play nice and not corrupt the filesystem.

Most of that is shot from the hip.
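
To make the sync-write distinction concrete, here's a minimal local sketch (Python; the /mnt/tank paths are just placeholders for a dataset mount). O_SYNC writes must reach stable storage before returning, which is roughly what ESXi's NFS client demands, while plain buffered writes are closer to what an iSCSI zvol sees under sync=standard when the initiator isn't flushing per write:

```python
import os
import time

def timed_writes(path, extra_flags, label, count=500, size=4096):
    """Write `count` blocks of `size` bytes and report elapsed time."""
    buf = b"\0" * size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags, 0o644)
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, buf)
    os.close(fd)
    print(f"{label}: {time.perf_counter() - start:.3f}s for {count} x {size} B writes")

# Buffered writes: roughly what a zvol sees over iSCSI with sync=standard
# when the initiator isn't issuing a flush per write.
timed_writes("/mnt/tank/test-async.bin", 0, "async")

# O_SYNC: each write must be on stable storage before returning, which is
# roughly the behavior ESXi's NFS client imposes (hence the usual SLOG advice).
timed_writes("/mnt/tank/test-sync.bin", os.O_SYNC, "sync")
```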

Regarding an earlier post: the IOPS optimization actually helps, as the NICs get striped a bit better for iSCSI. I'm running a pair of 10G channels from 4 hosts, and it's a noticeable difference. It's like saying a 9K MTU isn't needed because you're running 10G; the gain is less noticeable than on 1G, but it's still an optimization that cumulatively makes a difference.
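
For context on that IOPS tweak, here's a toy model of ESXi's Round Robin path selection (the path names are made up; the stock policy switches paths every 1000 I/Os, and dropping that limit to 1 is the usual tuning to stripe the links more evenly):

```python
from itertools import cycle

def distribute_ios(n_ios, paths, iops_limit):
    """Simulate round-robin path selection: stay on one path for
    `iops_limit` I/Os, then move to the next path in the rotation."""
    counts = dict.fromkeys(paths, 0)
    rotation = cycle(paths)
    current = next(rotation)
    issued_on_current = 0
    for _ in range(n_ios):
        if issued_on_current >= iops_limit:
            current = next(rotation)
            issued_on_current = 0
        counts[current] += 1
        issued_on_current += 1
    return counts

paths = ["vmhba64:C0:T0:L0", "vmhba65:C0:T0:L0"]  # hypothetical path names
print(distribute_ios(5000, paths, 1000))  # default: lopsided across links
print(distribute_ios(5000, paths, 1))     # iops=1: even striping
```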
 

vangoose

Cadet
Joined
Aug 31, 2020
Messages
4

I get better performance with NFS using 2 paths (same NICs and settings as iSCSI MPIO) to TrueNAS 12. I'm not sure how robust it is in 12, but in my test environment it's been rock solid so far.

Clients are a 3-node ESXi cluster, each with a dedicated dual-port Mellanox ConnectX-4 Lx EN 25G NIC.
TrueNAS is running as a VM on ESXi 7, with a dual-port Mellanox ConnectX-5 100G NIC presented as an SR-IOV VF.

I'm able to push 5 GB/s read and over 2.1 GB/s write from a VM on the NFS datastore. pNFS is more efficient than MPIO round robin in terms of bandwidth utilization, much like SMB multichannel. I'm nowhere close to that using iSCSI. The disk is an SN260 NVMe, rated at 6 GB/s read and 2.2 GB/s write, so with NFS I'm hitting either the network limit or the disk's limit. 4K random I/O also gives NFS a slight edge.
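
A quick back-of-envelope check on those figures (assuming two 25G paths carry the NFS traffic; the overhead factor is a rough guess):

```python
# Sanity-check the observed throughput against the link speeds above.
link_gbit = 25                   # per-port line rate, Gbit/s
links = 2                        # paths carrying NFS traffic
raw = links * link_gbit / 8      # aggregate GB/s on the wire: 6.25
usable = raw * 0.95              # ~5% TCP/NFS framing overhead, a rough guess
print(f"wire rate: {raw:.2f} GB/s, usable ~{usable:.2f} GB/s")
# The observed 5.0 GB/s read is in the ballpark of the 2x25G ceiling, while
# the SN260's rated 2.2 GB/s write lines up with the ~2.1 GB/s write result.
```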

And you can't beat NFS for simplicity. In our very large ESX farm (thousands of ESX servers), NFS is a lifesaver.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I've been in love with NFS for years, but since testing iSCSI on FN 12 we are seeing quite a performance increase for our terminal servers.

My L2ARC is being utilized more effectively, and customers have commented that their terminal servers are all more responsive.

We're still trying to figure out exactly why, but it's a thing.
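
One way to put a number on the L2ARC observation - a sketch assuming a FreeBSD-based TrueNAS, where the ARC counters are exposed via sysctl under kstat.zfs.misc.arcstats:

```python
import subprocess

def arcstat(counter):
    # FreeBSD-based TrueNAS exposes ZFS ARC counters as sysctl OIDs.
    out = subprocess.check_output(
        ["sysctl", "-n", f"kstat.zfs.misc.arcstats.{counter}"])
    return int(out)

l2_hits = arcstat("l2_hits")
l2_misses = arcstat("l2_misses")
total = l2_hits + l2_misses
if total:
    print(f"L2ARC hit ratio: {l2_hits / total:.1%}")
else:
    print("no L2ARC traffic recorded yet")
```

Comparing that ratio before and after switching a workload between NFS and iSCSI would show whether the L2ARC really is being hit more often, or whether the responsiveness gain comes from somewhere else.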
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112

This might simply be a side effect of "old NFS filesystem" vs. "shiny new VMFS filesystem" - svMotioning a machine from the NFS datastore to VMFS effectively "defrags" the VMDK, so it lands nice and contiguous on the new datastore.
 