Very slow speed with CIFS share

Status
Not open for further replies.

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Hello,

I get very slow speeds when copying big files to a CIFS share on my FreeNAS. The speed is around 40 MB/s, which seems very low IMHO.

Some information about my FreeNAS machine: Xeon E3-1231V3, 16GB Samsung ECC, Supermicro X10SL7-F, Sea Sonic SS-500L2U 500W, booting from mirrored SSDs.

I have one RAIDZ2 vdev containing 8x WD60EFRX. Dedup and compression are deactivated.

I have a link aggregation (loadbalance) configured in FreeNAS and have built a trunk in my HP 1810-24G switch for those two ports.

Any ideas why I get such slow speeds? I also have a Synology NAS, likewise with trunked ports, to which I can copy at 115 MB/s...

Regards,
AMiGAmann
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554

  • Post your /etc/local/smb4.conf file.
  • Run an iperf test (on the FreeNAS server type 'iperf -s'; on a *nix client with iperf installed type 'iperf -c <ip-address>').
  • Perform a test upload of the same files using scp or sftp and compare speeds.
  • Try without link aggregation.
Note that a RAIDZ2 vdev will have the write IOPS performance of a single disk.
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Here is the content of /etc/local/smb4.conf:
Code:
[global]
  server min protocol = NT1
  server max protocol = SMB3_00
  encrypt passwords = yes
  dns proxy = no
  strict locking = no
  oplocks = yes
  deadtime = 15
  max log size = 51200
  max open files = 469901
  syslog only = yes
  syslog = 1
  load printers = no
  printing = bsd
  printcap name = /dev/null
  disable spoolss = yes
  getwd cache = yes
  guest account = nobody
  map to guest = Bad User
  obey pam restrictions = yes
  directory name cache size = 0
  kernel change notify = no
  panic action = /usr/local/libexec/samba/samba-backtrace
  nsupdate command = /usr/local/bin/samba-nsupdate -g
  ea support = yes
  store dos attributes = yes
  lm announce = yes
  hostname lookups = yes
  unix extensions = no
  acl allow execute always = false
  acl check permissions = true
  dos filemode = yes
  multicast dns register = no
  domain logons = no
  local master = yes
  idmap config *: backend = tdb
  idmap config *: range = 90000001-100000000
  server role = standalone
  netbios name = NAS
  workgroup = NETWORK
  security = user
  pid directory = /var/run/samba
  create mask = 0660
  directory mask = 0770
  client ntlmv2 auth = yes
  dos charset = CP437
  unix charset = UTF-8
  log level = 2
   

[audio]
  path = /mnt/pool/audio
  printable = no
  veto files = /.snapshot/.windows/.mac/.zfs/
  writeable = yes
  browseable = no
  vfs objects = zfs_space zfsacl aio_pthread streams_xattr
  hide dot files = no
  guest ok = no
  nfs4:mode = special
  nfs4:acedup = merge
  nfs4:chown = true
  zfsacl:acesort = dontcare
   

[daten]
  path = /mnt/pool/daten
  printable = no
  veto files = /.snapshot/.windows/.mac/.zfs/
  writeable = yes
  browseable = no
  vfs objects = zfs_space zfsacl aio_pthread streams_xattr
  hide dot files = no
  guest ok = no
  nfs4:mode = special
  nfs4:acedup = merge
  nfs4:chown = true
  zfsacl:acesort = dontcare
   

[video]
  path = /mnt/pool/video
  printable = no
  veto files = /.snapshot/.windows/.mac/.zfs/
  writeable = yes
  browseable = no
  vfs objects = zfs_space zfsacl aio_pthread streams_xattr
  hide dot files = no
  guest ok = no
  nfs4:mode = special
  nfs4:acedup = merge
  nfs4:chown = true
  zfsacl:acesort = dontcare


I cannot run iperf because I have only Windows clients.

The test upload with WinSCP is even worse: the speed is ~15 MB/s.

I disabled the link aggregation and used only a single gigabit connection. The speed is approximately 60 MB/s now; strangely, that is higher than on the aggregated link. But it is still far from 115 MB/s...
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
I was able to run iperf.exe (2.0.5-3) on Windows. I ran it several times and it reports a bandwidth between 320 Mbit/s and 334 Mbit/s.

That is only about 40 MB/s! So this seems to be a networking problem?

I am using an HP 1810-24G switch, which tells me the port is running at 1000 Mbps full duplex. I ran iperf on different clients, so I am quite sure that the cables are not broken.

I never had these network problems with Synology NASes.

Can anybody tell me how to analyze this problem?
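The conversion behind that estimate is just bits to bytes; iperf reports Mbit/s, while file copies are usually shown in MB/s:

```shell
# iperf reports megabits per second; copy dialogs show megabytes per second.
# There are 8 bits per byte, so ~330 Mbit/s is roughly 41 MB/s.
mbits=330
echo "$(( mbits / 8 )) MB/s"
```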
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
I'd say a large number of network problems are really cable problems. It's unfortunate that cable certifiers are so expensive. I'd start by taking a known-good CAT5e or higher cable and directly connect client to server, then test CIFS performance again.
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Because I have only Windows clients, I first wanted to test the network speed between two Windows clients with the same iperf version. I connected a laptop directly to one of the stationary clients with a CAT6a cable, and iperf gave me ~450 Mbit/s (iperf -c <IP> -w 256k). So the Windows iperf results do not seem realistic.
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
I just wanted to make sure that it's really a networking/share problem and not the hard disks themselves:

Code:
dd if=/dev/random of=test.dat bs=2048k count=25k
25600+0 records in
25600+0 records out
53687091200 bytes transferred in 526.135460 secs (102040435 bytes/sec)

...which is 97.3 MB/s, which is probably okay.

Code:
dd if=test.dat of=/dev/null bs=2048k
25600+0 records in
25600+0 records out
53687091200 bytes transferred in 81.616964 secs (657793288 bytes/sec)

...which is 627.3 MB/s. I don't know why the value is so high; the dataset itself is *not* compressed.

Nevertheless, I think it's not the disks but a network or sharing problem.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
97.3 MB/s is not okay. That test is invalidated, though, since you used /dev/random. The bottleneck was /dev/random, not your zpool. ;)

If a valid test from /dev/zero (compression disabled, of course) yielded 100 MB/s, you'd have a major storage issue, and that *would* be the reason for your slow performance.

Your iperf tests are pretty disappointing. You need to get that sorted out though.
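A minimal version of the /dev/zero test looks like this (tiny sizes here so it finishes quickly; on FreeNAS you would point of= at a file on a dataset with compression off, and use a file several times the size of RAM):

```shell
# Sequential write test: /dev/zero avoids the /dev/random PRNG bottleneck.
# bs=2048k matches the block size used earlier in the thread.
dd if=/dev/zero of=/tmp/ddtest.dat bs=2048k count=32

# Sequential read back: writing to /dev/null measures only the read path.
dd if=/tmp/ddtest.dat of=/dev/null bs=2048k
```

On a file this small the read will be served almost entirely from cache, so only a file much larger than RAM gives a meaningful read figure.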
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
I did the dd tests again, this time with /dev/zero, as you recommended. The results are 703.7 MB/s (first test, writing) and 857.7 MB/s (second test, reading). I did the tests on a dataset which does not use compression.

I don't know why the values are so high. I thought that the transfer rate of today's SATA HDDs is limited to approx. 120 MB/s? If that is not the case, it should at least be limited by the SATA bus at 6 Gbit/s.

Can anybody explain those high values above?
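One likely piece of the answer: 6 Gbit/s is the raw SATA line rate, and 8b/10b encoding leaves only about 600 MB/s of usable bandwidth per disk; more importantly, ZFS stripes sequential I/O across all data disks in the vdev, so the pool is not limited to a single disk's rate. The encoding arithmetic:

```shell
# SATA 6 Gbit/s with 8b/10b encoding: 10 line bits carry one data byte,
# so usable bandwidth is line rate (in Mbit/s) divided by 10.
line_mbits=6000
echo "$(( line_mbits / 10 )) MB/s usable per disk"
```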

At least I am now pretty sure that the HDs are not the bottleneck. So I tried iperf again, this time from the current Ubuntu 15.10 booted from a USB stick. I installed iperf so that version 2.0.5 is used on both client and server.

This time the iperf result was 117.6 MB/s, which is quite okay I guess. (So the Windows version of iperf should *not* be used in combination with FreeNAS!)

So why do I only get a maximum of 60 MB/s when transferring data from/to FreeNAS (while I can transfer at approx. 115 MB/s to an older Synology NAS)?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Each disk can do approx. 120 MB/s. You have 8 disks. After parity, that's about 6 disks' worth of throughput at your disposal. If you hadn't gotten at least 500 MB/s, I would have assumed something was very wrong with your storage subsystem. ;) I already knew the speeds to expect, which is why I said this above:

If a valid test from /dev/zero (compression disabled, of course) yielded 100 MB/s, you'd have a major storage issue, and that *would* be the reason for your slow performance.

I knew what values you should have gotten, and since you never got those values (because you never had a valid test) I figured you should at least rule that out. ;)

So if Windows iperf gives weird results but Ubuntu iperf tests are fine, then it sure sounds like Windows has a problem. Did you try transferring files over CIFS from Ubuntu? If Ubuntu gives good speeds but Windows doesn't, then you kind of know what the problem is... Windows. At that point you'd be on your own to figure out why you're only getting 60 MB/s.

I can saturate a 1 Gb LAN (110+ MB/s) on Windows 7 with no tweaks on the server or desktop, so I know that much better speeds are not only possible but should be expected.
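The disks-minus-parity estimate above, sketched out (the ~120 MB/s per-disk rate is an assumption typical of large 5400/7200 rpm drives, not a measured value):

```shell
# 8-disk RAIDZ2: two disks' worth of parity, ~6 disks of data bandwidth.
disks=8
parity=2
per_disk_mbs=120   # assumed sequential rate of one WD60EFRX
echo "$(( (disks - parity) * per_disk_mbs )) MB/s theoretical sequential"
```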
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Thanks for the explanation of the disk speeds.

I now tried transferring files over CIFS from Ubuntu, but that is even slower (approx. 20-30 MB/s). But this might be because it was booted from USB as a live Linux?

Because I can saturate 1 Gb LAN with the (4-year-old) Synology NAS, which also uses Samba, I am still not sure whether I misconfigured something in FreeNAS. Very strange indeed that you don't have any problems without doing any tweaks.

I used 3 completely different Windows 7 clients (Enterprise, Pro, Home Premium) with different network adapters. I updated the network adapter drivers on all systems. All systems have an SSD, which I used as source/destination to measure the speeds. I did not get speeds higher than 60 MB/s on any of those systems.

Is there anything more I can check?
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
How can I check whether the hardware is good enough? I doubt that the Xeon E3-1231V3 is a problem, but I am not sure whether 16GB of RAM is enough for the 8x6TB RAIDZ2.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I would go with 32GB of RAM minimum for a system with 8x6TB drives in RAIDZ2.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Can I somehow test whether 16GB is too little?

Yep. Add more RAM.

I've got 10x6TB drives in a RAIDZ2 with 32GB of RAM, and I've felt the squeeze of not enough ARC before. In fact, when I upgraded to the 6TB drives I had it in my head that I would be unhappy and would have to go build a newer system that could handle more than 32GB of RAM. Fortunately I haven't had to do that, but I *have* to upgrade before I'll consider any more storage.
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Hmm. Well then I will get the additional 16GB and report back.

Is it possible to take the pool temporarily offline, so I don't accidentally boot FreeNAS with the pool attached before the new memory is tested? I want to run memtest first.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you remove enough disks that a vdev isn't available, the zpool won't mount. You can also detach the zpool and leave the checkbox to wipe the disks unchecked.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Hi anodos,

thanks for the information, but I had already added those auxiliary parameters to the CIFS service.

This resulted in faster browsing of the directory structure in Windows Explorer, but did not increase the maximum speed during file transfers.
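For reference, auxiliary parameters in this area are typically Samba socket or AIO tuning options. The exact parameters referenced above are not shown in the thread; the following is only an illustrative sketch using real Samba option names, not necessarily what was actually applied:

Code:
socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
aio read size = 16384
aio write size = 16384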
 