Very slow SMB read speeds after TrueNAS 12 upgrade

ude6

Dabbler
Joined
Aug 2, 2017
Messages
37
@J-Lo Can you post the variables that autotune amended?

I had to set the following to get SMB to work semi-ok:

Code:
#NETWORK

Atlas% sudo sysctl net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendbuf_max: 2097152 -> 16777216
Atlas% sudo sysctl kern.ipc.maxsockbuf=16777216
kern.ipc.maxsockbuf: 2097152 -> 16777216
Atlas% sudo sysctl net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvbuf_max: 2097152 -> 16777216
Atlas% sudo sysctl net.inet.tcp.recvspace=4194304
net.inet.tcp.recvspace: 65536 -> 4194304
Atlas% sudo sysctl net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.recvbuf_inc: 16384 -> 524288
Atlas% sudo sysctl net.inet.tcp.sendspace=2097152
net.inet.tcp.sendspace: 32768 -> 2097152
Atlas% sudo sysctl net.inet.tcp.sendbuf_inc=32768
net.inet.tcp.sendbuf_inc: 8192 -> 32768
Atlas% sudo sysctl net.route.netisr_maxqlen=2048
net.route.netisr_maxqlen: 256 -> 2048
Atlas% sudo sysctl net.inet.tcp.mssdflt=1460
net.inet.tcp.mssdflt: 536 -> 1460

#AIO
sudo sysctl vfs.zfs.l2arc_noprefetch=0

vfs.aio.target_aio_procs: 4
sudo sysctl vfs.aio.target_aio_procs=16

vfs.aio.max_aio_procs: 32
sudo sysctl vfs.aio.max_aio_procs=128

vfs.aio.max_aio_queue: 1024
sudo sysctl vfs.aio.max_aio_queue=8192

vfs.aio.max_aio_queue_per_proc: 256
sudo sysctl vfs.aio.max_aio_queue_per_proc=1024
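
If you want to sanity-check these before committing them as Tunables, the current values can be read back in one call (read-only, so safe to run; the list below just mirrors the variables set above). Keep in mind that values set with sysctl at runtime do not survive a reboot, so to make them stick they need to be added as sysctl-type Tunables.

Code:
# Dump the current values of the tunables listed above (read-only)
sysctl net.inet.tcp.sendbuf_max kern.ipc.maxsockbuf net.inet.tcp.recvbuf_max \
    net.inet.tcp.recvspace net.inet.tcp.recvbuf_inc net.inet.tcp.sendspace \
    net.inet.tcp.sendbuf_inc net.route.netisr_maxqlen net.inet.tcp.mssdflt \
    vfs.zfs.l2arc_noprefetch vfs.aio.target_aio_procs vfs.aio.max_aio_procs \
    vfs.aio.max_aio_queue vfs.aio.max_aio_queue_per_proc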
 

TallCoolOne

Cadet
Joined
Oct 31, 2017
Messages
8
@ude6 I am already using most of these same values, added manually; the only difference is that net.inet.tcp.mssdflt is set to 1448. Those definitely helped tweak 10GbE network speeds on my pre-12.0 FreeNAS system, but on 12.0 and above they do not fix the super slow read problem.
 

ude6

Dabbler
Joined
Aug 2, 2017
Messages
37
Yes. I guess I took the values from some older posts here in the forums, and they mostly fix the transfer speed problems on my end (I am on 1GbE only). I was asking about the autotune values because I am hesitant to activate it. I have also seen some instability on NFS connections for ESXi VMs that I had not experienced with older versions: "Corrupt redo log" errors that I had never seen before, with no error or log entry on the TrueNAS side. Since changing these parameters I have yet to see the error again, so maybe this at least hides the problem...
 

hydrosaure

Cadet
Joined
Dec 15, 2020
Messages
3
Yes, that sysctl restores sequential read speeds, great find.
 

Attachments

  • clipboard.png (84 KB)

TallCoolOne

Cadet
Joined
Oct 31, 2017
Messages
8
Hey all,
So the sysctl net.iflib.min_tx_latency=1 solution fixed the slowness for me too, but now I'm seeing random and frequent disconnects. I'll bring it up in the related thread.
thanks!
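
For anyone landing here later, the fix in question is a single sysctl (the name and value are the ones already mentioned in this thread). It resets on reboot, so it also needs to be added as a sysctl-type Tunable to persist:

Code:
# Apply at runtime (lost on reboot; add as a Tunable to persist)
sysctl net.iflib.min_tx_latency=1
# Read it back to confirm
sysctl net.iflib.min_tx_latency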
 

Mark St.

Dabbler
Joined
Dec 26, 2020
Messages
13
Hiho,

Fresh system with 3x Exos X10 10 TB in RAIDZ1.
Write speed was 300 MB/s, but reads were only about 180 MB/s, which seemed strange.

I added a single SSD for testing and got 500 MB/s read AND write, as expected.

After "sysctl net.iflib.min_tx_latency=1", read AND write are 300 MB/s... which should be okay for a 3-disk setup.
 

Windowseat

Cadet
Joined
May 28, 2020
Messages
1
We recently upgraded from FreeNAS to TrueNAS Core 12.0 and then to 12.0-U1 when it was released. We primarily used FreeNAS as an SMB store for our Veeam backups. After migrating to TrueNAS, Veeam backups would fail. Transfer speeds were also slow, maxing out at around 400 Mbps inbound over Ethernet to TrueNAS. Veeam indicated that it had failed to write to the SMB share.

It appears we have "resolved" the issue by enabling System > Advanced > Autotune. This added a bunch of entries under Tunables; we basically had nothing in Tunables before autotune was enabled. (There were a couple of old entries for iohyve that I deleted.) After enabling autotune and rebooting TrueNAS, Veeam appears to be working. Inbound Ethernet throughput seems to be back to normal, with spikes above 1 Gbps.

Autotune does not appear to be recommended as a permanent / production solution, but it seems to be working for now.

If anyone else with these issues tries autotune and has success, please post back to this thread as it may help point to the problem.

I had the same issue, but also a repeatedly disconnecting Sonos. What worked for me was the same thing: enabling System > Advanced > Autotune.
Not sure how long I should keep this box checked, but I will leave it for a few hours.
 

Oodimdemus

Cadet
Joined
Mar 1, 2021
Messages
2
Just came across this post regarding poor read performance. I experienced exactly the same on a Dell R720XD using the standard quad GigE adapter. A brief summary of hardware:

  • Dell R720XD
  • 128GB RAM
  • TrueNAS-12.0-U6.1 on SSD
  • 8x ST4000VN008 4TB arranged as striped mirror
  • No SLOG, no L2ARC
  • iSCSI configured per best practice documents
  • 2x 1GigE ports configured for iSCSI MPIO, with each port on a dedicated subnet and VLAN
  • VMware ESXi 6.7 iSCSI initiators
Network performance testing using iperf3 achieved 800 Mbps on each port in both send and receive directions. No networking issues observed there at all. Local storage performance using dd with sync=always was as expected. Reading files locally also achieved results that met expectations.
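
Something along these lines reproduces the kind of per-port iperf3 test described (the IP address is a placeholder and the exact options may have differed from what was actually run; -R reverses the direction so both the send and receive paths get exercised):

Code:
# On the TrueNAS box
iperf3 -s
# On the test host: send towards TrueNAS, then reverse the direction
iperf3 -c 192.168.10.2 -t 30
iperf3 -c 192.168.10.2 -t 30 -R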

Testing from the ESXi host via the ESXi CLI resulted in terrible performance. Peak read throughput was about 10 Mbps on each GigE port, for 20 Mbps total via iSCSI MPIO.

Setting "System > Advanced > Enable Autotune" followed by a reboot improved reads by a factor of 20. Writes have degraded somewhat, however.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Dell servers such as these typically come with a PERC RAID controller. It is important that this not be used for ZFS. Please see the following two articles:


The first absolutely covers any PERC H710, H710i, etc., type controller in your R720XD; that controller would need to be pulled and swapped for an HBA.


The second discusses other things you may need to know.

ESXi host via ESXi CLI resulted in terrible performance. Peak throughput for read operations was about 10Mbps on each GigE port for 20Mbps total via iSCSI MPIO.

Testing things from the ESXi busybox interface can be a bit weird; the interactions between typical UNIX tools and VMFS6 do not always work the way you expect. It's best to test from within a VM on a thick provisioned, eager zeroed disk so that you are measuring performance in a manner that's known and expected to work well.
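
As a concrete illustration (not necessarily the exact procedure, and assuming a Linux guest), the in-guest test can be as simple as a dd against a file sitting on the thick provisioned, eager zeroed disk; the direct I/O flags bypass the guest page cache, and the path and size are placeholders:

Code:
# Inside the guest VM: sequential write, then read back (~8 GiB)
dd if=/dev/zero of=/root/testfile bs=1M count=8192 oflag=direct
dd if=/root/testfile of=/dev/null bs=1M iflag=direct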
 