sotiris.bos
Explorer · Joined Jun 12, 2018 · Messages: 56
Hello! I have been digging into Samba performance and I need some clarification.
First of all some system specs:
PC:
AMD Threadripper 2920x - ASRock X399 Taichi
32GB quad channel RAM
Intel X520 10Gb NIC, OM3 fiber
Samsung NVME
Arch Linux 5.1.15 with KDE
Win 10 VM with SR-IOV VF from the Intel X520 NIC. Networking is done through the X520 by the VF passthrough.
NAS:
FreeNAS 11.2 U4
Dell R710 - Dual E5620 Xeons
4x 10TB WD Whites in stripe of mirrors, 38% full
64GB quad channel RAM
Standard Broadcom 1Gb NIC
Switch: Mikrotik CSS326
I was accessing my FreeNAS Samba share through KDE's Dolphin (Dolphin mounting the share itself) and I was getting slow read/write speeds, so I ran some tests. I am the only one accessing the FreeNAS box and no other operations were running. This was done multiple times, copying back and forth the same 2.0GB mkv file.
Results taken from both PC and FreeNAS NICs:
Dolphin mounting the share: 76MB/s with dips to 45 - 55MB/s
Nautilus/Files mounting the share: 87MB/s, no dips
scp: 111 - 112MB/s
Windows 10 KVM VM with SR-IOV: 107 - 108MB/s
At this point I was baffled, so I tried mounting the share manually and got these results:
Dolphin with "mount -t cifs //myserver.mylocaldomain/media /mnt/myserver/media -o vers=3.0,uid=1000,gid=1000,credentials=/etc/samba/credentials/myserver": 112MB/s
Dolphin with "mount -t cifs //myserver.mylocaldomain/media /mnt/myserver/media -o vers=3.0,cache=none,uid=1000,gid=1000,credentials=/etc/samba/credentials/myserver": 45 - 65MB/s
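In case anyone wants to make the faster mount permanent, an /etc/fstab sketch of the default-cache variant could look like this (same share path and credentials file as in the mount commands in this post; the fstab entry itself is my suggestion, not part of the original setup):

```
# /etc/fstab (sketch): CIFS mount with default caching, i.e. the
# variant that reached ~112MB/s rather than the cache=none one
//myserver.mylocaldomain/media  /mnt/myserver/media  cifs  vers=3.0,uid=1000,gid=1000,credentials=/etc/samba/credentials/myserver  0  0
```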
I know this is a FreeNAS forum and not a Linux one, but the only answer I got on Linux forums was "Use NFS", so I thought I would try here to see if I can get an explanation. I don't want to use NFS since I have a mixed Windows/Linux environment, and I don't want to have to deal with two protocols.
Why is performance halved when disabling the read/write cache on the mount.cifs module? Everything in the chain (NVMe, NICs, network, ZFS stripe of mirrors) should sustain Gigabit speeds even without caching, unless something else is going on as well. The array can clearly do 180MB/s+ sequential writes, since that is what I was getting when migrating the data from my single 8TB WD Red pool.
Is this something like the NFS sync option? Is there any other way to disable write caching on the Linux client to the NAS without losing performance?
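On that last question: mount.cifs exposes three cache modes (none, strict, loose), and cache=strict, the default on recent kernels, is usually the middle ground, keeping local caching but invalidating it when the server signals changes through oplocks/leases. A hedged sketch, mirroring the mount commands above with only the cache option changed (my suggestion, not something from the original tests):

```shell
# cache=strict (the kernel default since Linux 3.7): data is cached
# locally but revalidated via oplocks/leases, so it stays close to
# coherent without the per-request round trips that cache=none incurs.
mount -t cifs //myserver.mylocaldomain/media /mnt/myserver/media \
  -o vers=3.0,cache=strict,uid=1000,gid=1000,credentials=/etc/samba/credentials/myserver
```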
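For measuring this without any client-side cache effects, a dd-based sketch like the following might help (my own suggestion, not from the original tests; the mount point matches the commands above, and with no argument it falls back to a temp directory for a local dry run):

```shell
#!/bin/sh
# Sketch of a cache-independent throughput check for a mounted share.
# Usage: sh cifs_bench.sh /mnt/myserver/media
set -e
TARGET="${1:-$(mktemp -d)}"
FILE="$TARGET/ddtest.bin"

# Write test: conv=fdatasync makes dd flush everything to the server
# before reporting a rate, so the client write cache cannot inflate it.
dd if=/dev/zero of="$FILE" bs=1M count=64 conv=fdatasync

# Read test: drop the local page cache first (needs root) so the data
# actually crosses the network instead of coming from RAM:
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if="$FILE" of=/dev/null bs=1M

rm -f "$FILE"
echo "benchmark finished in $TARGET"
```

The reported write rate with conv=fdatasync should land close to the cache=none numbers, which would confirm the gap is client caching rather than the network or the pool.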