How to improve slow read speeds over SMB/CIFS mount?

benze

Dabbler
Joined
Dec 30, 2013
Messages
17
I am hoping someone can help me improve read speeds over my network from my TrueNAS installation. I have an SMB share mounted on a Linux server with terrible read performance (15.2MB/s), so I am trying to figure out what I can do to improve it.

My install is TrueNAS Core (TrueNAS-12.0-U6.1) running with the following configuration:
- 72GB RAM
- 4 x 14TB Seagate Exos X16 7200 RPM (ST14000NM001G)
- Dual Xeon X5670 @ 2.93GHz (24 threads)


To establish a baseline, I ran read/write tests in the TrueNAS shell itself to rule out the NAS hardware as the problem. The local numbers look decent:

eric@truenas:/mnt/HomeNAS/media/eric $ dd if=/dev/zero of=./test.dat bs=2048k count=50000
50000+0 records in
50000+0 records out
104857600000 bytes transferred in 43.785081 secs (2394824848 bytes/sec)

eric@truenas:/mnt/HomeNAS/media/eric $ dd of=/dev/null if=./test.dat bs=2048k count=50000
50000+0 records in
50000+0 records out
104857600000 bytes transferred in 18.638570 secs (5625839373 bytes/sec)




Then I ran the same read test from my server, against the mounted media share, with an abysmal result.
[eric@docker1 eric]$ dd of=/dev/null if=./test.dat bs=2048k count=50000
50000+0 records in
50000+0 records out
104857600000 bytes (105 GB) copied, 6876.65 s, 15.2 MB/s



So I am going from 5.6GB/s (local) to 15.2MB/s (over the network).


Running iperf3 between the two servers gives me full gigabit transmission speeds:
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver
iperf Done.
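For reference, the test was run roughly like this (a sketch; the hostnames match the ones used elsewhere in this post, and the default run length is already 10 seconds):

Code:
# On the TrueNAS box, start the server side
iperf3 -s

# On the Linux client (docker1), run a 10-second test against the NAS
iperf3 -c nas.domain.ca -t 10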





So I presume it is either how my share is configured or how my share is mounted.

I have one pool configured across the full space with lz4 compression, and a dataset within the pool set to inherit the pool's compression.

[Screenshot: pool and dataset configuration showing lz4 compression]
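For reference, the same settings can be confirmed from the shell. A minimal sketch, assuming the dataset is named HomeNAS/media (the name is inferred from the paths above, so adjust as needed):

Code:
# Show the compression setting and the achieved compression ratio
zfs get compression,compressratio HomeNAS HomeNAS/media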


I have an SMB share defined as:

[Screenshot: SMB share configuration]
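For anyone who prefers text over a screenshot, the effective Samba configuration can usually be dumped from the TrueNAS shell; this assumes testparm picks up the generated smb4.conf, which it normally does on Core:

Code:
# Print the parsed Samba configuration, including the per-share settings
testparm -s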



And the mount on my server is defined as a CIFS mount in /etc/fstab:

//nas.domain.ca/media /mnt/media cifs rw,user,credentials=/etc/fstab.credentials.docker,vers=3.0,_netdev,soft,uid=bittorrent,gid=media,noperm,noacl,file_mode=0775,dir_mode=02775 0 0



Offhand I don't see anything inherently incorrect about this, although I am forcing SMB 3.0 (vers=3.0).
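For what it's worth, the client side can confirm what was actually negotiated. A quick sketch, assuming the cifs kernel module exposes its usual debug data:

Code:
# Show the mount options the kernel actually applied (including vers=)
grep cifs /proc/mounts

# Show active SMB sessions and the negotiated dialect
cat /proc/fs/cifs/DebugData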


Any suggestions for what I can do to improve performance over the wire? I find 15MB/s abysmal on a GigE network connection for my needs. I would have expected something in the 100-115MB/s range, close to the GigE line rate. Or am I expecting too much?

Thanks!

Eric
 
Joined
Oct 22, 2019
Messages
3,641
Firstly, "dd" tests do not paint an accurate picture if the file is compressible, since ZFS inline compression is being used.

Secondly, what do you mean by NFS3 vs NFS4? Only SMB (and thus "cifs") is being used here.
 

benze

Dabbler
Joined
Dec 30, 2013
Messages
17
Firstly, "dd" tests do not paint an accurate picture if the file is compressible, since ZFS inline compression is being used.

Fair enough - what would the better test be? To read an existing media (non-compressible) file on the NAS and then the same file over the wire? I can try that and see if I get significantly different results. Using a random .mkv video file:


eric@truenas:/mnt/HomeNAS/media/eric $ dd if=./2160p.mkv of=/dev/null bs=2048k count=50000
7960+1 records in
7960+1 records out
16694818378 bytes transferred in 88.967466 secs (187650826 bytes/sec)


[eric@docker1 eric]$ dd if=./2160p.mkv of=/dev/null bs=2048k count=50000
7960+1 records in
7960+1 records out
16694818378 bytes (17 GB) copied, 1224.21 s, 13.6 MB/s



So the local read speed dropped to 187MB/s, but the read speed over the network stayed about the same (13.6MB/s).

Secondly, what do you mean by NFS3 vs NFS4? Only SMB (and thus "cifs") is being used here.
Typo - corrected. I had mistakenly thought I was using NFSv3, but indeed it is only CIFS/SMB being used here. I had tried NFS mounts previously, but they were too problematic to secure correctly. Please ignore any NFS references.

Is there anything I can try to tune to make improvements? I've been scouring old posts trying to find pointers but am having difficulty identifying what I can do. If the network isn't the bottleneck, shouldn't I get somewhere closer to the network's limit (say 80MB/s) for read speeds, taking SMB and other network overhead into account?


Thanks,

Eric
 
Joined
Oct 22, 2019
Messages
3,641
Try to copy a large, noncompressible, sequential file, such as a movie.

Remove any extraneous options in your mount; let the server and client auto-negotiate the highest version.

Try adding the "cache=loose" in your mount options as a test.

What is your vdev layout? Two mirrors, or a single RAIDZ2?
 

benze

Dabbler
Joined
Dec 30, 2013
Messages
17
dding the "cache=loose" in your mount options as a test.
Try to copy a large, noncompressible, sequential file, such as a movie.

Remove any extraneous options in your mount; let the server and client auto-negotiate the highest version.

Try adding the "cache=loose" in your mount options as a test.

What is your vdev layout? Two mirrors, or a single RAIDZ2?
The vdev layout is a single RAIDZ1:

zpool status
  pool: HomeNAS
 state: ONLINE
  scan: scrub repaired 0B in 1 days 13:42:12 with 0 errors on Mon Dec 25 13:42:14 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        HomeNAS                                         ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/a116e136-45b7-11ec-9fa8-18a905702548  ONLINE       0     0     0
            gptid/a1023d0b-45b7-11ec-9fa8-18a905702548  ONLINE       0     0     0
            gptid/a1092e04-45b7-11ec-9fa8-18a905702548  ONLINE       0     0     0
            gptid/a0e9601d-45b7-11ec-9fa8-18a905702548  ONLINE       0     0     0



I'm currently testing removing additional parameters from the mount options, and what I have discovered is interesting. If I recreate the mount with the exact same configuration options but at a different mount point, I get the expected 80-90MB/s throughput. If I test on the existing mount point, I still see 15MB/s.

So I can only presume that there are processes reading on the existing mount point which are choking my bandwidth. I used `lsof` to identify any processes that had open files on the mount and shut them down. Now when I run a `dd` over that mount point I get ~50MB/s, which is significantly better, but still less than a brand-new mount point.
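For reference, the checks were roughly along these lines (a sketch; fuser is an alternative if it is installed):

Code:
# List every process with a file open anywhere on the mounted filesystem
lsof /mnt/media

# Alternative: show the PIDs/users holding the mount busy
fuser -vm /mnt/media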

Any ideas what I can do to chase down the gremlin(s) consuming the bandwidth on the existing mount point?
 
Joined
Oct 22, 2019
Messages
3,641
How's the fragmentation?
Code:
zpool list HomeNAS
 

benze

Dabbler
Joined
Dec 30, 2013
Messages
17
How's the fragmentation?
Code:
zpool list HomeNAS
$ zpool list HomeNAS
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
HomeNAS  50.9T  26.7T  24.3T        -         -    11%    52%  1.00x  ONLINE  /mnt



11%. Not sure if that's good or bad, but from a quick scan through ZFS discussions, it seems that anything under 70-80% fragmentation shouldn't affect performance much.

I've taken the drastic measure of unmounting the CIFS mount and then remounting it from the /etc/fstab entry, and I now get closer to 80MB/s transfer speeds. To say that I am confused is an understatement. If there were open file descriptors, I would have expected to see them in `lsof`, but given that nothing was showing, I can't figure out why unmounting and remounting makes a difference.
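The remount itself was nothing exotic, roughly the standard sequence (the lazy-unmount fallback in the comment is only needed if the mount reports busy):

Code:
# Unmount the share (use 'umount -l /mnt/media' if it complains the target is busy)
umount /mnt/media

# Remount using the existing /etc/fstab entry
mount /mnt/media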

And addressing the issue via a drastic measure like that without understanding the root cause simply tells me that this problem is destined to reappear in the future as well.

Any suggestions on what I can look at on the Linux server to see why the mount gets choked like that?
 