Hardware:
Server: TrueNAS-12.0-U1.1:
- Supermicro X11SPH-NCTPF
- Intel Xeon Scalable Silver 4210R (10C/20T, 2.4/2.3 GHz)
- 128 GB RAM
- Storage:
- Data: 6x Seagate Exos X16 ST16000NM002G 16 TB SAS (RAIDZ2)
- Log: 1x Intel Optane SSD 900P 280 GB PCI express
- Spare: 1x Seagate Exos X16 ST16000NM002G 16 TB SAS
- Metadata: 3x INTEL DC S4610 480GB SATA (mirror)
- Hard disk controllers:
- SAS: Onboard 3008
- SATA: C622
- Network cards
- Onboard X722
Client:
- Dell PowerEdge T20
- Xeon E3-1225 v3 3.2 GHz
- 16 GB RAM
- Mellanox ConnectX-3
Configuration:
- Pool: RAIDZ2, encrypted (AES-256-GCM), compressed (lz4)
- Network: MTU 9000
- Number of NFS servers: 20
- TrueNAS tuneables (see the sysctl sketch after this list):
- kern.ipc.maxsockbuf 8388608
- net.inet.ip.intr_queue_maxlen 2048
- net.inet.tcp.delayed_ack 0
- net.inet.tcp.mssdflt 1448
- net.inet.tcp.recvbuf_inc 524288
- net.inet.tcp.recvbuf_max 16777216
- net.inet.tcp.recvspace 524288
- net.inet.tcp.sendbuf_inc 16384
- net.inet.tcp.sendbuf_max 16777216
- net.inet.tcp.sendspace 524288
- net.route.netisr_maxqlen 2048
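For reference, these tuneables are sysctl-type entries, so on the FreeBSD side they end up applied roughly like this (a sketch; in TrueNAS itself they are set under System → Tunables in the web UI, not from the shell):
# sysctl kern.ipc.maxsockbuf=8388608
# sysctl net.inet.tcp.recvbuf_max=16777216
# sysctl net.inet.tcp.sendbuf_max=16777216
(and so on for the remaining entries above; the active values can be checked with sysctl -a)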
Problem description:
Slow sequential read over NFS. I have created a 200 GB test file (urandom data is incompressible, so lz4 compression should not skew the numbers):
# dd if=/dev/urandom of=test bs=1M count=200k
When I read it locally on the TrueNAS host I get around 850 MB/s:
# dd if=test of=/dev/null bs=1M
214748364800 bytes transferred in 249.877236 secs (859415480 bytes/sec)
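While the local read runs, zpool iostat can confirm that the data is actually coming from the disks rather than the ARC (pool name here is a placeholder):
# zpool iostat tank 1
If the per-second read bandwidth roughly matches what dd reports, the disks, not the cache, are serving the file.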
However, when I do the same but mounted on the client over NFS (default mount opts) I only get around 290 MB/s:
# dd if=test of=/dev/null bs=1M
214748364800 bytes (215 GB) copied, 740.928 s, 290 MB/s
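For completeness, the client side is just a plain default mount, roughly like this (host name and export path are placeholders):
# mount -t nfs truenas:/mnt/tank/data /mnt/data
# nfsstat -m
nfsstat -m shows the negotiated options (NFS version, rsize/wsize); a small rsize would be one plausible cap on sequential read throughput.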
The highest CPU load during the test is around 7-9% on the TrueNAS dashboard and 20-25% WCPU for nfsd in top.
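The per-thread picture comes from running top on the TrueNAS shell with kernel processes and threads visible (standard FreeBSD top flags):
# top -SH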
Problem solving steps:
I have tested the TCP network performance between the machines using iperf3 and see no issues:
# iperf3 -c [TrueNAS IP]
...
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 11.5 GBytes 9.89 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 11.5 GBytes 9.88 Gbits/sec receiver
# iperf3 -c [TrueNAS IP] -R
...
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 11.5 GBytes 9.90 Gbits/sec 0 sender
[ 4] 0.00-10.00 sec 11.5 GBytes 9.90 Gbits/sec receiver
I have tried different pool settings (no encryption, no metadata vdev), as well as reverting the tuneables, the number of NFS servers, and the MTU to their defaults, but I don't see any significant difference in performance.
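A further check that might isolate a per-stream limit (I have not run this yet; the second test file here is hypothetical) is reading two files in parallel over the mount and seeing whether the aggregate rises above ~290 MB/s:
# dd if=test of=/dev/null bs=1M & dd if=test2 of=/dev/null bs=1M & wait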
Does anyone have any advice on how to solve this? Or is this performance expected?