10GbE SMB write performance is slow

nullane

Cadet
Joined
Nov 2, 2021
Messages
4
Hey all, I've been scouring these forums for the past few weeks, but I haven't been able to find a solution to my issue.

I am experiencing an issue where 10GbE performance is not at the level I would expect.

Server Specs:
  • TrueNAS Core 12.0-U6 server
  • Dell PowerEdge T420 Server - Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (24 threads)
  • 64GB Ram
  • Two different Pools (same issue both pools)
  • Pool1: 4x 12TB WD Red Pro CMR (mirror config, 2 vdevs)
  • Pool2: 2x 256GB Samsung NVMe in stripe (for performance testing)
  • Intel X520 SFP+ NIC (connected with a DAC cable)
  • No tunables - from-scratch config

Workstation Specs:
  • AMD Ryzen 7 5800x
  • 32GB Ram
  • Samsung 980 Pro 2TB SSD
  • Intel X540-T1 10GbE NIC (connected with Cat6a)
  • Windows 11

Switch: Netgear XS708E

The issue I am experiencing is that SMB writes to either pool start fast (9-10 Gbit/s) and then slow down to around 3 Gbit/s after roughly 5 GB of transfer. I am transferring large DVR files from a Samsung 980 Pro NVMe drive, so the source drive is not the problem.

What I have tried:
  • Workstation: adjusting flow control and the receive/send buffers on the Intel X540-T1 10GbE NIC.
  • TrueNAS: trying sync=always, standard, and disabled; all behave the same.
  • A local test on Pool2 shows around 2.9 GByte/s.
  • iperf shows 10 Gbit/s in both directions.
This feels like an issue with this TrueNAS server itself, almost like it's not using its cache, whereas sending data to a TrueNAS VM on a different physical server provided 5 Gbit/s+ consistently over SMB. Reading from the NVMe pool gives me 10 Gbit/s; only writing is an issue. I wonder if Windows 11 is throttling, or if the ZFS cache stops absorbing writes after about 5 GB?
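
To sanity-check that last hunch, here is a rough back-of-the-envelope sketch (the ~4 GB in-RAM write buffer and the ~330 MB/s pool write speed are assumptions on my part, not measured values):

# Hypothetical model of the "fast start, then ~3 Gbit/s" behaviour.
# Assumptions (not measured): ~4 GB of in-RAM ZFS write buffering and
# ~330 MB/s sustained write speed for a pool of 2 mirrored HDD vdevs.

buffer_bytes = 4e9            # assumed in-RAM write buffer
line_rate    = 10e9 / 8       # ~1.25 GB/s arriving over 10GbE
pool_rate    = 330e6          # ~0.33 GB/s draining to the HDD pool

# The buffer fills at (line_rate - pool_rate); once it is full the
# client gets throttled down to roughly pool_rate (~2.6 Gbit/s).
seconds_fast = buffer_bytes / (line_rate - pool_rate)
transferred  = line_rate * seconds_fast
print(f"{seconds_fast:.1f} s fast, ~{transferred/1e9:.1f} GB before the slowdown")
# -> roughly 5 GB, in the same ballpark as what I am seeing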

Any help is appreciated.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,945
Test with iperf first, in both directions. That will test the network without touching the disks. Once you know the network is good, you can start looking elsewhere (disks, etc.). But the first step is to test the network.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,398
 

nullane

Cadet
Joined
Nov 2, 2021
Messages
4
@NugentS

.8 is my workstation, .10 is the server. Getting 9.4 Gbit/s in both directions.

.\iperf-2.1.0-rc-win.exe -c 10.10.1.10 -P 8
[ 7] local 10.10.1.8 port 59072 connected with 10.10.1.10 port 5001
[ 4] local 10.10.1.8 port 59074 connected with 10.10.1.10 port 5001
[ 3] local 10.10.1.8 port 59075 connected with 10.10.1.10 port 5001
[ 8] local 10.10.1.8 port 59076 connected with 10.10.1.10 port 5001
[ 1] local 10.10.1.8 port 59079 connected with 10.10.1.10 port 5001
[ 2] local 10.10.1.8 port 59078 connected with 10.10.1.10 port 5001
[ 6] local 10.10.1.8 port 59077 connected with 10.10.1.10 port 5001
[ 5] local 10.10.1.8 port 59073 connected with 10.10.1.10 port 5001
------------------------------------------------------------
Client connecting to 10.10.1.10, TCP port 5001
TCP window size: 512 KByte (default)
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 163 MBytes 137 Mbits/sec
[ 6] 0.00-10.00 sec 1.19 GBytes 1.02 Gbits/sec
[ 2] 0.00-10.00 sec 2.74 GBytes 2.35 Gbits/sec
[ 1] 0.00-10.00 sec 2.67 GBytes 2.29 Gbits/sec
[ 8] 0.00-10.00 sec 2.73 GBytes 2.35 Gbits/sec
[ 7] 0.00-10.00 sec 230 MBytes 193 Mbits/sec
[ 4] 0.00-10.00 sec 1.18 GBytes 1.02 Gbits/sec
[ 3] 0.00-10.01 sec 168 MBytes 141 Mbits/sec
[SUM] 0.00-10.01 sec 11.1 GBytes 9.49 Gbits/sec
.\iperf-2.1.0-rc-win.exe -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 1] local 10.10.1.8 port 5001 connected with 10.10.1.10 port 18314
[ 2] local 10.10.1.8 port 5001 connected with 10.10.1.10 port 25799
[ 3] local 10.10.1.8 port 5001 connected with 10.10.1.10 port 47665
[ 5] local 10.10.1.8 port 5001 connected with 10.10.1.10 port 21320
[ 7] local 10.10.1.8 port 5001 connected with 10.10.1.10 port 23212
[ 4] local 10.10.1.8 port 5001 connected with 10.10.1.10 port 11723
[ 8] local 10.10.1.8 port 5001 connected with 10.10.1.10 port 38358
[ 6] local 10.10.1.8 port 5001 connected with 10.10.1.10 port 56612
[ ID] Interval Transfer Bandwidth
[ 2] 0.00-10.01 sec 1.37 GBytes 1.18 Gbits/sec
[ 3] 0.00-10.01 sec 1.38 GBytes 1.18 Gbits/sec
[ 5] 0.00-10.01 sec 2.75 GBytes 2.36 Gbits/sec
[ 7] 0.00-10.01 sec 1.36 GBytes 1.17 Gbits/sec
[ 1] 0.00-10.01 sec 1.36 GBytes 1.17 Gbits/sec
[ 4] 0.00-10.01 sec 939 MBytes 787 Mbits/sec
[ 8] 0.00-10.01 sec 935 MBytes 784 Mbits/sec
[ 6] 0.00-10.01 sec 932 MBytes 781 Mbits/sec
[SUM] 0.00-10.01 sec 11.0 GBytes 9.40 Gbits/sec
 

Morris

Contributor
Joined
Nov 21, 2020
Messages
120
You are filling the turbo write cache on the SSDs. The speed is as expected.
The 12TB WD Red Pro has a sequential write speed of about 164 MB/s; with two mirror vdevs that is 164 MB/s x 2 = 328 MB/s, or (x8 for bits per second) roughly 2,624 Mbit/s, i.e. about 2.6 Gbit/s.

WD Red Pro drives are energy-efficient drives designed for NAS use. If you need more speed, you will need to use lots of smaller drives of the same type, or much faster drives such as the HGST (WD) Ultrastar He12 enterprise hard drives, which have a sequential write speed of 255 MB/s.
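
Rough math if you want to check it yourself (assuming the spec-sheet 164 MB/s figure and that sequential write throughput scales with the number of mirror vdevs):

# Back-of-the-envelope pool write throughput estimate.
# Assumptions: per-drive sequential write speed from the spec sheet, and
# that each mirror vdev contributes one drive's worth of write speed
# (both members of a mirror write the same data).

def pool_write_gbit(per_drive_mb_s: float, vdevs: int) -> float:
    """Estimated sequential write throughput in Gbit/s."""
    total_mb_s = per_drive_mb_s * vdevs    # mirror vdevs add up for writes
    return total_mb_s * 8 / 1000           # MB/s -> Gbit/s

print(pool_write_gbit(164, 2))   # ~2.6 Gbit/s for two WD Red Pro mirror vdevs

That lines up closely with the ~3 Gbit/s you settle at once the cache is full.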
 

nullane

Cadet
Joined
Nov 2, 2021
Messages
4
@Morris My Samsung NVMe SSDs are capable of 1.7 GB/s (that's gigabytes) sustained, and I am seeing around 3 Gb/s (that's gigabits). The speeds I am seeing via TrueNAS are roughly 5x slower than what the drives themselves are capable of.
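
Just to be explicit about the units (the 1.7 GB/s is the drive's claimed sustained figure; whether the 256 GB models in Pool2 hold that outside their SLC cache is an assumption):

# Unit sanity check: gigabytes/s vs gigabits/s.
drive_sustained_GBps = 1.7     # claimed sustained write speed, GB/s
observed_Gbps = 3.0            # observed SMB write speed, Gbit/s

drive_sustained_Gbps = drive_sustained_GBps * 8    # = 13.6 Gbit/s
print(drive_sustained_Gbps / observed_Gbps)        # ~4.5x gap, i.e. "roughly 5x"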
 

Morris

Contributor
Joined
Nov 21, 2020
Messages
120
You will need to look at the bus layout of both the server and the client you are testing from. I'd look very carefully at the workstation, as even the X570 chipset has limited PCIe lanes. Which motherboard and chipset you have, as well as which slots you chose, will be critical to getting full performance from your 10Gb NIC and M.2 SSD.

It is very important to note that both of your NICs are PCIe v2.x (5.0 GT/s) x8 cards, so pay attention to which slot you place them in and to how much lane width that slot actually gets when the other slots are populated.
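
As a rough illustration of why slot width matters (assuming PCIe 2.x's 8b/10b line encoding; real-world throughput is a bit lower still due to protocol overhead):

# Rough usable bandwidth of a PCIe 2.x link.
# Each lane signals at 5.0 GT/s with 8b/10b encoding, so it carries
# about 4 Gbit/s of data before protocol overhead.

def pcie2_bandwidth_gbps(lanes: int) -> float:
    gt_per_s = 5.0        # gigatransfers per second per lane (PCIe 2.x)
    encoding = 8 / 10     # 8b/10b line encoding
    return gt_per_s * encoding * lanes

print(pcie2_bandwidth_gbps(8))   # ~32 Gbit/s: plenty for 10GbE in a true x8 slot
print(pcie2_bandwidth_gbps(4))   # ~16 Gbit/s: still fine in an x4 slot
print(pcie2_bandwidth_gbps(1))   # ~4 Gbit/s: an x1 link would bottleneck 10GbE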
 
Last edited:

Kailee71

Contributor
Joined
Jul 8, 2018
Messages
110
@nullane what you're going through sounds a lot like the standard ZFS throttling. I ran into the same issue and found there are tunables that help, *if* you're using an adequate SLOG device. Have a look here to see if this helps.
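
Roughly, the ceiling I believe is at play here (the 10%-of-RAM and 4 GiB-cap defaults, and the vfs.zfs.dirty_data_max sysctl name, are from memory; double-check them on your own box before tuning anything):

# Sketch of the OpenZFS dirty-data ceiling on a 64 GB TrueNAS Core box.
# Assumed defaults: dirty_data_max = 10% of RAM, capped at 4 GiB
# (exposed on FreeBSD as the vfs.zfs.dirty_data_max sysctl).

GiB = 1024**3

def dirty_data_max_default(ram_bytes: int, cap: int = 4 * GiB) -> int:
    return min(ram_bytes // 10, cap)

print(dirty_data_max_default(64 * GiB) / GiB)               # 4.0 GiB despite 64 GB of RAM
print(dirty_data_max_default(64 * GiB, cap=8 * GiB) / GiB)  # ~6.4 GiB with a raised cap

Raising the cap lets more of an incoming burst land in RAM before the write throttle kicks in, which is roughly what the tunables in that link do; the trade-off is a larger window of async data at risk if the box loses power.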

Kai.
 