Slow Samba performance with cache=none option on client

sotiris.bos

Explorer
Joined
Jun 12, 2018
Messages
56
Hello! I have been digging into Samba performance and I need some clarification.

First of all some system specs:

PC:
AMD Threadripper 2920x - ASRock X399 Taichi
32GB quad channel RAM
Intel X520 10Gb NIC, OM3 fiber
Samsung NVMe SSD
Arch Linux 5.1.15 with KDE
Win 10 VM with an SR-IOV VF from the Intel X520 NIC; the VM's networking goes through the X520 via VF passthrough.


NAS:
FreeNAS 11.2 U4
Dell R710 - Dual E5620 Xeons
4x 10TB WD Whites in stripe of mirrors, 38% full
64GB quad channel RAM
Standard Broadcom 1Gb NIC


Switch: Mikrotik CSS326

I was accessing my FreeNAS Samba share through KDE's Dolphin (with Dolphin mounting the share itself) and getting slow read/write speeds, so I ran some tests. I am the only one accessing the FreeNAS box, and no other operations were running. Each test was repeated multiple times, copying the same 2.0GB mkv file back and forth.

Results taken from both PC and FreeNAS NICs:

Dolphin mounting the share: 76MB/s with dips to 45 - 55MB/s
Nautilus/Files mounting the share: 87MB/s, no dips
scp: 111 - 112MB/s
Windows 10 KVM VM with SR-IOV: 107 - 108MB/s

At this point I was baffled, so I tried mounting the share manually and got these results:

Dolphin with mount -t cifs //myserver.mylocaldomain/media /mnt/myserver/media -o vers=3.0,uid=1000,gid=1000,credentials=/etc/samba/credentials/myserver: 112MB/s

Dolphin with mount -t cifs //myserver.mylocaldomain/media /mnt/myserver/media -o vers=3.0,cache=none,uid=1000,gid=1000,credentials=/etc/samba/credentials/myserver: 45 - 65MB/s
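For anyone wanting to reproduce the faster of the two manual mounts persistently, an /etc/fstab entry along these lines should be equivalent to the first command above (server name, mount point, and credentials file are taken from that command; `_netdev` is an assumption to make the mount wait for networking):

```shell
# /etc/fstab entry equivalent to the faster manual mount (default cache behaviour)
//myserver.mylocaldomain/media  /mnt/myserver/media  cifs  vers=3.0,uid=1000,gid=1000,credentials=/etc/samba/credentials/myserver,_netdev  0  0
```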

I know this is a FreeNAS forum and not a Linux forum, but the only answer I got on Linux forums was "Use NFS", so I thought I would try here to see if I can get an explanation. I don't want to use NFS since I have a mixed Windows/Linux environment, and I don't want to have to deal with two protocols.

Why is performance halved when read/write caching is disabled on the cifs mount? Everything in the chain (NVMe, NICs, network, ZFS pool) should sustain at least Gigabit speeds even without caching, unless something else is going on as well. The array can clearly do 180MB/s+ sequential writes, since that is what I was getting when migrating the data from my single 8TB WD Red pool.

Is this something like the NFS sync option? Is there any other way to disable write caching on the Linux client to the NAS without losing performance?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
sotiris.bos said:
> Hello! I have been digging into Samba performance and I need some clarification.
> Is this something like the NFS sync option? Is there any other way to disable write caching on the Linux client to the NAS without losing performance?

Probably not. You would potentially see similar performance issues if you disabled oplocks on a Windows client. Why aren't you using cache=strict?
cache=strict means that the client will attempt to follow the CIFS/SMB2 protocol strictly. That is, the cache is only trusted when the client holds an oplock. When the client does not hold an oplock, then the client bypasses the cache and accesses the server directly to satisfy a read or write request. By doing this, the client avoids problems with byte range locks. Additionally, byte range locks are cached on the client when it holds an oplock and are "pushed" to the server when that oplock is recalled.
This seems like a sensible and safe setting.
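In practice that just means dropping cache=none. A sketch based on the mount command from the original post (server name, mount point, and credentials file are the OP's; note that cache=strict has been the mount.cifs default since Linux 3.7, so spelling it out is optional):

```shell
# Same mount as before, but with cache=strict made explicit
# (this is already the default on kernels >= 3.7)
mount -t cifs //myserver.mylocaldomain/media /mnt/myserver/media \
  -o vers=3.0,cache=strict,uid=1000,gid=1000,credentials=/etc/samba/credentials/myserver
```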
 