Are there recommendations or specific resources that lay out a general step-by-step approach for troubleshooting poor Multi-gigabit (SMB) throughput performance with SSD media?
I found some scattered information, but I am not sure where to continue in my case.
My specific issue:
Slow SMB file transfer speed (using a 70 GB test file)
Hardware:
TrueNAS Mini X+ with several 500 MB/s+ (read and write) SSDs reporting a 512-byte sector size. 10GbE NIC.
Test-Network:
TrueNAS Mini X+ <----10GbE Link----> 10GbE Switch <-----5GbE Link ----> 5GbE Client
Connection throughput test using iperf -d:
*This speed is in line with what others have reported for the 5GbE adapter I am using (Sabrent NT-SS5G)
------------------------------------------------------------
Client connecting to 10.10.10.111, TCP port 5001
TCP window size: 1.24 MByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.11 port 39574 connected with 10.10.10.111 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 23.8 GBytes 3.41 Gbits/sec
[ 3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
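For reference, the result above is from a 60-second iperf2 dual test (the -d mentioned above). To tell single-stream limits apart from the aggregate link limit, a parallel-stream run can be compared against it; the commands below are a sketch, with the IP taken from the output above and the stream count of 4 picked arbitrarily:

iperf -c 10.10.10.111 -t 60 -d     # bidirectional dual test, as run above
iperf -c 10.10.10.111 -t 60 -P 4   # four parallel TCP streams

If the -P 4 run lands well above the single-stream number, the bottleneck is per-connection (window size/latency) rather than the link itself.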
I would be happy with the iperf speed above (400+ MB/s), but SMB file transfers in either direction run at only 180-230 MB/s max:
- A single transfer of the 70 GB test file runs at around 180-190 MB/s.
- The highest speed, 230 MB/s, is reached with two parallel SMB file transfers (a scriptable way to reproduce this parallel case is sketched below).
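For a scriptable reproduction of the parallel-transfer case from any machine with Samba's client tools, something like the following should work; the share path, user name, and file names are placeholders:

smbclient //10.10.10.111/share -U tester -c 'put test70g.bin run1.bin' &
smbclient //10.10.10.111/share -U tester -c 'put test70g.bin run2.bin' &
wait   # smbclient prints an average transfer rate for each put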
Notes:
- CPU: the 8-core Mini X+ CPU stays far from its performance limit (brief 60%+ peaks during iperf)
- I started out testing speeds with an all-SSD zpool consisting of several drives. To rule out raidz-level limitations, I also ran tests using single-drive pools. (A local benchmark that takes SMB and the network out of the picture entirely is sketched after these notes.)
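To take SMB and the network out of the picture entirely, a local sequential benchmark on the NAS itself should show what the pool alone can sustain. A minimal fio sketch, assuming fio is available on the box and with /mnt/tank/testds as a placeholder for the dataset under test; the 70G size mirrors the test file and should exceed RAM so the ARC does not mask the disks on the read pass:

fio --name=seqwrite --directory=/mnt/tank/testds --rw=write --bs=1M --size=70G --ioengine=posixaio --end_fsync=1
fio --name=seqread --directory=/mnt/tank/testds --rw=read --bs=1M --size=70G --ioengine=posixaio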
Pool-level changes I have attempted so far (corresponding commands sketched below):
- Compression (LZ4): on and off
- Recordsize: 128 KiB and 1 MiB
- atime: on and off
- (I have not touched ashift; it is currently at the default.)
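For reference, the toggles above map to ZFS dataset properties like the following (tank/smbshare is a placeholder for the actual dataset). One thing worth remembering is that recordsize only applies to blocks written after the change, so the test file has to be rewritten between runs:

zfs set compression=lz4 tank/smbshare    # or compression=off for comparison
zfs set recordsize=1M tank/smbshare      # or recordsize=128K
zfs set atime=off tank/smbshare          # or atime=on
zfs get compression,recordsize,atime tank/smbshare   # verify current values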
Changing the settings above affected the speeds only slightly, but I am far off target here, correct? What speed should I be expecting, and what could I try next?