Pause in Linux+SMB transfers compared to Windows+SMB

mtt122

Cadet
Joined
Apr 9, 2018
Messages
2
I have a FreeNAS system I intend to use as backup storage. I was trying to get a feel for the file transfer speeds my hardware should be capable of when I found a strange (to me at least) difference between Linux+SMB and Windows+SMB that prevents Linux from matching Windows performance on file transfers to FreeNAS on my system.

My FreeNAS system (standalone, nothing else running on it):
FreeNAS 11.1 - U4
Supermicro X9SCM-F
E3-1240V2
32GB ECC DDR3 1600
Intel 82579LM and 82574 1Gb NICs
WD Red 5400 RPM 4TB drives

Client System:
Linux Mint 18.3 host OS (Ubuntu 16.04 I believe)
Windows 10 VM (KVM/QEMU)
ASUS Server board
dual E5-2670
64GB memory
Intel NICs

All tests used large sequential writes of media files, 1GB or larger per file, with a total transfer of 20-60GB each time. I used the same dataset in all cases: share type Unix, compression off, sync=standard.
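For reference, the dataset settings can be confirmed from the FreeNAS shell with something like the following (the pool/dataset name is a placeholder):

# Confirm the dataset properties used for the tests (placeholder dataset name)
zfs get compression,sync tank/backup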

I am using single-disk pools. Yes, I know (and have read) that for most cases this is pointless and defeats some of the ZFS benefits: there is no redundancy, errors can't be fixed, etc. I do it for the same reason others have (to have scrub capability); this is a home backup system only, not an active file server. I follow 3-2-1, so if I lose a disk I still have the originals plus an offsite copy.

Bottom line is that from the Windows 10 VM (SMB 3.1.1) I can get ~108MB/s, at least up to the multi-file 60GB total that I tested. The transfer rate stays very steady and smooth throughout the entire transfer, and FreeNAS reporting confirms this for both disk and network. This is about the max I would expect from these 5400 RPM drives and a 1Gb pipe. When I use Samba on the Linux system (I tried SMB 3, 2.1, and 1.0) I can get only ~70MB/s, and it is very spiky no matter what I did with mount parameters etc.
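For reference, the Linux-side mounts I'm describing looked roughly like this (host, share, and username are placeholders; only the vers= value changed between runs):

# Force a specific SMB dialect from the Linux client (placeholder host/share/user)
sudo mount -t cifs //freenas.local/backup /mnt/backup -o username=myuser,vers=3.0
# repeated with vers=2.1 and vers=1.0 for the other tests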

Note that since this is a WIN10 VM, both WIN10 and Linux are using the same hardware, same NIC, same cable, same switch, and same network path, and both are reading the source files from the same physical disk.

I turned on FreeNAS SMB logging, and the log timestamps confirm that the WIN10 writes are consistent both within a file and across file-to-file transitions. The Linux writes, on the other hand, are fast within a file (hitting the same 100MB/s+), but there is a repeatable ~5 sec delay between files. Top shows that during writing, smbd sits at ~25-30% on a single thread (as expected). Between files, smbd drops to zero for about 5 seconds (WIN10 does not do this; smbd stays steady). The timestamps in the log (log setting = full) show the same thing, but there are no errors: for Linux there is just a ~5 sec (slightly over that) delay between when each file gets "opened" and the first write log entry for that specific file.
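One way to see the gap is to pull the timestamped entries for a single file out of the smbd log and compare the "opened" entry with the first write that follows it. A rough sketch (the log path and exact message text depend on the FreeNAS/Samba version and log level, so treat this as illustrative only):

# Show the timestamp header lines adjacent to log entries mentioning a given file
grep -B1 'somefile.mkv' /var/log/samba4/log.smbd | grep '^\['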

If you do the math, that 5+ sec delay between the end of one file and the start of the next, at 1Gb speeds, accounts for the difference between 108MB/s and 70MB/s. So I can clearly see why Linux+SMB is slower, but I don't know what causes that delay between files on FreeNAS when using Linux to connect, but not when using WIN10.
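To spell the arithmetic out with round numbers (file sizes vary, so this is only approximate):

1GB of data at ~108MB/s       : 1000 / 108  ≈ 9.3 s of actual writing
add the ~5 s pause per file   : 9.3 + 5     ≈ 14.3 s per file
effective average rate        : 1000 / 14.3 ≈ 70 MB/s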

If I had to guess, the delay is suspiciously close to the 5 sec ZFS transaction group (txg) timeout. Maybe the txg is either being flushed, or has to expire, before the next file can proceed. But then again, why only for a Linux SMB connection to the share and not a WIN10 connection?

I am looking into ZFS tunables and may look at newer Ubuntu versions with newer samba versions to see if that makes a difference.
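As a starting point, the txg-related sysctls can at least be inspected from a FreeNAS shell (read-only here; persistent changes would go in as FreeNAS Tunables, and I'm not suggesting specific values):

# Inspect ZFS transaction group and dirty-data settings on FreeNAS
sysctl vfs.zfs.txg.timeout
sysctl vfs.zfs.dirty_data_max
sysctl vfs.zfs.dirty_data_sync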

I also looked at NFS, but I can't get it above 70MB/s either, even with async, and it was spiky too.
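For completeness, the NFS test used a client mount along these lines (host, export path, and NFS version are placeholders):

# Linux NFS client mount with async (placeholder host and paths)
sudo mount -t nfs -o async,vers=3 freenas.local:/mnt/tank/backup /mnt/backup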

If anyone knows what might be going on or has any suggestions, any help would be welcome.

Thanks
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I don't know if you're quoting a maximum or an average rate for your NFS transfer, but while mounting an NFS share async in Linux allows for client-side buffering, the NFS server in FreeNAS defaults to operating in sync mode. Unlike an NFS server in Linux, where this behaviour can be set on individual exports using a sync/async parameter, in FreeNAS/FreeBSD the behaviour is AFAIK determined by the value of vfs.nfsd.async and is global.
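You can check the server-side setting from a FreeNAS shell; if I understand it correctly, 0 is the default and corresponds to the sync behaviour described above:

# Check the global NFS server async setting on FreeNAS
sysctl vfs.nfsd.async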

As such, sync writes are being generated on ZFS, which results in additional writes to the ZIL (or SLOG if you have one). You can monitor this using zilstat. So you would expect, by default, transfers over an NFS share to be slower than over a SAMBA share, which operates in async mode.
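A minimal sketch of that monitoring, assuming the zilstat script that ships with FreeNAS (the interval and sample count are just examples):

# Sample ZIL activity once per second, 30 samples, while the transfer is running
zilstat 1 30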

You'd expect to speed up transfers using NFS with an async client mount by setting the sync property on the shared dataset to disabled, which turns off the dataset's ZIL. But turning off the ZIL is generally regarded as unsafe, as you lose the protection against potential data loss from server crashes. Nor does it alter the basic transactional nature of ZFS, which can give you that "spikey" pattern of transfers that appear to happen in bursts.
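For reference, the property change being described looks like this (replace tank/backup with the actual dataset; as said above, this trades safety for speed):

# Disable sync writes on the shared dataset -- unsafe if the server crashes or loses power
zfs set sync=disabled tank/backup
# ...and to restore the default behaviour afterwards:
zfs set sync=standard tank/backup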

If you wanted to see the possible benefit of using a SLOG, then use a temp RAMDISK. See here:

https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/

(No need to change the dirty_data_sync parameter.) The combination of a SLOG and an async NFS client may still result in a "bursty" data transfer.
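For anyone wanting to try that, a temporary RAM-disk SLOG can be attached roughly like this (illustrative only; the linked thread has the full procedure and the warnings that go with it, and the pool name is a placeholder):

# Create a 4GB RAM disk and attach it as a log device (prints the device name, e.g. md0)
mdconfig -a -t swap -s 4g
zpool add tank log /dev/md0
# remove it again when finished testing:
zpool remove tank md0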

With a single mirror VDEV pool of WD Reds and a gigabit connection, the only way I've seen a guaranteed constant steady transfer speed using an NFS share in Linux is to use an explicit sync client mount and a SLOG. Using systat -ifstat 1 on FreeNAS showed this to be a pretty steady 68 MB/s on my microserver. Safe, but slow.

With a single large file, e.g. a 50GB vdi, a combination of a SLOG and an async NFS client did result in a constant steady transfer speed of 116MB/s.

Transfer speeds for SAMBA shares can vary significantly, say between a large number of small files and a small number of large files. On my system, a single large 50GB vdi file transfers at a constant 116MB/s using a CIFS mount in Linux, while a transfer of multiple small files shows the same "bursty" behaviour you've seen. On my system, tested transfer times are about the same with CIFS and no SLOG as with NFS using an async client mount and a RAMDISK SLOG. All of this can depend on the "voodoo" of the exact parameters used for the SAMBA server and shares.
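As an example of the kind of "voodoo" I mean, these are smb.conf/auxiliary parameters people commonly experiment with; the values are purely illustrative and whether they help at all is very workload-dependent:

# Illustrative auxiliary parameters for the SMB service/share (not a recommendation)
socket options = TCP_NODELAY
use sendfile = yes
aio read size = 16384
aio write size = 16384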

When I've compared transferring the same data to a FreeNAS Windows share using Win7 and using Linux with CIFS, Win7 appears to be quicker and tends to hold a constant speed. You'd have to dig into the MS SMB and CIFS implementations and the workings of SAMBA on FreeNAS to understand the difference.

If you find a magic bullet, I'm first in the queue to be told.

P.S. My tests were done using a single client and with no concurrent read activity on the pool.
 