Issue with poor Win 10 CIFS upload to share

dtom10

Explorer
Joined
Oct 16, 2014
Messages
81
Hello everyone,

I'm facing a rather annoying performance issue that prevents me from getting anything useful done between my Win 10 workstation and the FreeNAS server. I tried looking for solutions on the interwebs but came up with nothing that works.

Here's the background:

Win 10 workstation with an on-board Realtek Gigabit adapter. As per info found online, large send offload has been disabled, but performance is still in the woods looking for bear crap.
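For anyone wanting to check the same thing, this is roughly how the offload setting can be verified and disabled from an elevated PowerShell prompt (the adapter name "Ethernet" is a placeholder, yours may differ):

# show the current large send offload state for the adapter
Get-NetAdapterLso -Name "Ethernet"
# disable LSO for both IPv4 and IPv6
Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6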

To eliminate factors I created a Fedora 24 boot USB stick and found that copying the hiberfil.sys file (13 GB in my case) from the NTFS system drive to FreeNAS over both NFS and CIFS resulted in roughly the same figures, ~107 MB/s.

At this point I'm pretty much ruling out any bad settings on my part when setting up the shares on FreeNAS.

Here are the numbers dumped from the Fedora test:


[root@localhost live]# dd if=/dev/zero of=/mnt/san/testfile bs=100k count=100000
100000+0 records in
100000+0 records out
10240000000 bytes (10 GB, 9.5 GiB) copied, 88.4941 s, 116 MB/s


/mnt/san is the NFS share

mount.nfs 192.168.0.200:/mnt/zpool/nasware/san /mnt/san/

Then I mounted the NTFS system drive:


[root@localhost live]# ntfs-3g /dev/sde4 /mnt/ntfs/

[root@localhost live]# ll -h /mnt/ntfs/
total 18G
[...]
-rwxrwxrwx. 1 root root 13G Nov 17 14:50 hiberfil.sys
-rwxrwxrwx. 1 root root 4.8G Nov 17 14:50 pagefile.sys


And timed the file copy...


[root@localhost live]# time cp /mnt/ntfs/hiberfil.sys /mnt/san/

real 2m2.028s
user 0m0.023s
sys 0m5.527s


NFS seems to deliver. Now for CIFS to the same dataset on FreeNAS:


[root@localhost live]# mount.cifs //192.168.0.200/san /mnt/cifs/ -o user=backup
Password for backup@//192.168.0.200/san: **********************************************************************************
[root@localhost live]# ll -h /mnt/cifs/
total 377M
drwxrwxrwx. 2 1001 1001 0 Nov 17 13:28 admsrv
-rwxr-xr-x. 1 1001 1001 13G Nov 17 15:54 hiberfil.sys
-rwxr-xr-x. 1 1001 1001 4.8G Nov 17 15:52 pagefile.sys
drwxr-xr-x. 2 1001 1001 0 Nov 17 13:27 test
-rw-r--r--. 1 1001 1001 628K Nov 17 15:45 testfile
drwxrwxrwx. 2 1001 1001 0 Nov 17 13:28 test-ks


Deleted the file...


[root@localhost live]# rm -rf /mnt/cifs/hiberfil.sys


This is the timed test of the same file copy, this time over CIFS:


[root@localhost live]# time cp /mnt/ntfs/hiberfil.sys /mnt/cifs/

real 2m1.100s
user 0m0.019s
sys 0m7.947s


What happens when I try to upload files to the CIFS share from Win 10 is shown in the attached screenshots: the transfer starts off OK, then drops to 10 Mbit speeds or lower.

I'm trying to find out what the issue might be, or how to investigate further, since I don't have a clue why CIFS uploads would be crippled on Win 10.
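One thing that could help isolate it is a raw TCP throughput test with iperf3 to rule out the network path itself. This is only a sketch; iperf3 would need to be installed on both the FreeNAS box and the workstation, and the IP is the one from my setup:

# on FreeNAS (server side)
iperf3 -s
# on the Win 10 workstation: upload direction, then download with -R
iperf3 -c 192.168.0.200 -t 30
iperf3 -c 192.168.0.200 -t 30 -R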

The network between the workstation and FreeNAS passes through a Netgear managed switch, which is used to LAGG the two Broadcom gigabit NICs on my HP Gen8 MicroServer. Besides the LAGG there are no other settings that shape or prioritise traffic.
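For completeness, the LAGG can also be inspected from a shell on the FreeNAS side; the interface name lagg0 is an assumption and may be numbered differently on other setups:

# shows the lagg protocol and the state of the member ports
ifconfig lagg0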

Any help is appreciated.
 

Attachments

  • freenas_issue.PNG (57.4 KB)
  • freenas_issue2.PNG (66.7 KB)
  • freenas_issue3.PNG (306 KB)
  • freenas_issue4.PNG (345.4 KB)

dtom10

Explorer
Joined
Oct 16, 2014
Messages
81
Were you able to figure this out?

I forgot about this thread :)

In my case it was gzip compression causing this. For some reason I had enabled gzip compression on the dataset some time ago and forgotten about it. While looking on the internet for pointers on how to better test ZFS performance, to troubleshoot a potential performance issue with the array, I found this site: https://calomel.org/zfs_raid_speed_capacity.html

That's when I thought of checking the ZFS compression setting. Now I get an average of 80-90 MB/s from my desktop to the CIFS share.
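For anyone hitting the same thing, the dataset's compression setting can be checked and changed from the FreeNAS shell. The dataset name below is the one from my setup, and switching to lz4 (or off) only affects newly written data; existing blocks stay gzip-compressed until rewritten:

# check what compression is active and how well it is compressing
zfs get compression,compressratio zpool/nasware/san
# switch to the much cheaper lz4 (or use compression=off)
zfs set compression=lz4 zpool/nasware/san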

The file I chose to test performance with was smaller than my ARC. I have 16 GB of memory, so as long as I was hitting the cache, the compression cost didn't really show up in the numbers.
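If anyone wants to sanity-check that on their own box, the current ARC size and limit can be read on FreeNAS with sysctl (values are in bytes):

# how much the ARC currently holds
sysctl kstat.zfs.misc.arcstats.size
# the configured ARC ceiling
sysctl vfs.zfs.arc_max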
 