Poor SMB write performance

Zeff

Dabbler
Joined
Mar 13, 2020
Messages
19
Config:
AMD Ryzen 3
16GB RAM
10G Network
RAIDZ2

I am using 13.0-U6.1 and am having extremely poor SMB write performance.

(Screenshot attachment: CleanShot 2024-03-25 at 09.43.36.jpg)

Pool is 63% used.
zpool status
  pool: Aphrodite
 state: ONLINE
  scan: scrub repaired 0B in 19:42:46 with 0 errors on Sun Mar 10 20:42:51 2024
config:

        NAME                                                STATE     READ WRITE CKSUM
        Aphrodite                                           ONLINE       0     0     0
          raidz2-0                                          ONLINE       0     0     0
            gptid/e5fb0df5-72c0-11ea-9935-a85e455542ae.eli  ONLINE       0     0     0
            gptid/dac2c3ac-6369-11ea-8a1a-a85e455542ae.eli  ONLINE       0     0     0
            gptid/dc3b8a4b-6369-11ea-8a1a-a85e455542ae.eli  ONLINE       0     0     0
            gptid/17469227-59f4-11ed-a9c9-6805cac843fa.eli  ONLINE       0     0     0
            gptid/dc8dfd23-6369-11ea-8a1a-a85e455542ae.eli  ONLINE       0     0     0

errors: No known data errors


10G network

iperf results
------------------------------------------------------------
Client connecting to 10.0.1.88, TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
[  1] local 10.0.1.8 port 50884 connected with 10.0.1.88 port 5001 (icwnd/mss/irtt=69/8948/1000)
[ ID] Interval       Transfer     Bandwidth
[  1] 0.00-10.01 sec  11.3 GBytes  9.71 Gbits/sec

IOzone test results

Command line used: iozone -i 0 -t 2
Output is in kBytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 kBytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 2 processes
Each process writes a 512 kByte file in 4 kByte records

Children see throughput for 2 initial writers  =     635.39 kB/sec
Parent sees throughput for 2 initial writers   =     634.80 kB/sec
Min throughput per process                     =     316.45 kB/sec
Max throughput per process                     =     318.94 kB/sec
Avg throughput per process                     =     317.69 kB/sec
Min xfer                                       =     508.00 kB

Children see throughput for 2 rewriters        =     457.20 kB/sec
Parent sees throughput for 2 rewriters         =     456.95 kB/sec
Min throughput per process                     =     228.60 kB/sec
Max throughput per process                     =     228.60 kB/sec
Avg throughput per process                     =     228.60 kB/sec
Min xfer                                       =     512.00 kB
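Note that with no -s or -r options, this run only writes a 512 kByte file per process in 4 kByte records (as the output above shows), so it mostly measures small-record overhead rather than streaming throughput. A sketch of a larger, more sequential-style run; the 4 GiB file size and 1 MiB record size are just example values:

iozone -i 0 -t 2 -s 4g -r 1m    # -i 0 write/rewrite test, -t 2 two threads, 4 GiB per file, 1 MiB records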
 

somethingweird

Contributor
Joined
Jan 27, 2022
Messages
183
Hard drive model?
 

Zeff

Dabbler
Joined
Mar 13, 2020
Messages
19
Not sure that this would affect the outcome, but: 6 4TB drives:

2 HGST
4 Seagate Ironwolf
 

somethingweird

Contributor
Joined
Jan 27, 2022
Messages
183
Not sure that this would affect the outcome, but: 6 4TB drives:
To determine whether the drives are CMR or SMR. If they are SMR, that would be the reason. Otherwise we move on to other possibilities.
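If it helps, the exact model strings can be pulled from the TrueNAS CORE shell without opening the case (device names like ada0 are just examples, adjust to your system):

camcontrol devlist              # lists attached disks with their model strings
smartctl -i /dev/ada0           # prints Device Model and Serial Number for one disk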
 

Zeff

Dabbler
Joined
Mar 13, 2020
Messages
19
OK, but we are talking about a 33x performance difference between writes and reads.
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228
Not sure that this would affect the outcome, but: 6 4TB drives:

2 HGST
4 Seagate Ironwolf
Some drives use a technology called SMR, which works very poorly with ZFS. You can check the datasheet.
The high reads could be due to ARC.
Which program/OS/Samba/SMB version did you use?
Also: how full is your pool?
Did you turn on sync writes or atime on the respective dataset?
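For what it's worth, those dataset properties can be checked in one go from the shell (the dataset name below is just a placeholder for whatever the share points at):

zfs get sync,atime,recordsize,compression Aphrodite/share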
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228
OK, but we are talking about a 33x performance difference between writes and reads.
Reads are probably due to caching (ARC).
You can check by going to Settings -> Shell and typing "zpool iostat -v 2" while running your benchmark. If you see little to no read operations on the pool, then the data is being read from RAM, which is orders of magnitude faster than your network.
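Something like this, run while the read test is going (pool name taken from the zpool status above; the sysctl counters are the stock OpenZFS ARC stats on CORE):

zpool iostat -v Aphrodite 2                                         # near-zero read ops => data is coming from ARC
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses  # ARC hit/miss counters, compare before and after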
 

Zeff

Dabbler
Joined
Mar 13, 2020
Messages
19
SMB 3
Pool 63% utilised as stated
Tried sync and no sync - made no difference
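For reference, the per-dataset A/B test is usually done like this (dataset name is a placeholder, and sync=disabled should only be left on for the duration of the test):

zfs set sync=disabled Aphrodite/share    # run the SMB write test
zfs set sync=standard Aphrodite/share    # put the default back afterwards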
 

Zeff

Dabbler
Joined
Mar 13, 2020
Messages
19
zpool iostat -v 2:



This was reading a 5.5 GB ISO file:
 

Attachments

  • CleanShot 2024-03-25 at 16.08.04.jpg (216.6 KB)

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
 

Zeff

Dabbler
Joined
Mar 13, 2020
Messages
19
@Stux, I'm not sure what a SLOG can do to close the gap. Can you elaborate, as I'm relatively new to TrueNAS?

I just ran a test on a locally connected (USB 3.1 gen 2) IronWolf of the same model, using Blackmagic.
This connection is 10Gbps, and my network is 10G.
Writes: 152 MB/s
My TrueNAS, using RAIDZ2, is 5 times slower.
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228
@Stux, I'm not sure what a SLOG can do to close the gap. Can you elaborate, as I'm relatively new to TrueNAS?

I just ran a test on a locally connected (USB 3.1 gen 2) IronWolf of the same model, using Blackmagic.
This connection is 10Gbps, and my network is 10G.
Writes: 152 MB/s
My TrueNAS, using RAIDZ2, is 5 times slower.
Can you give us the serial numbers or precise model numbers of the hard drives?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Blackmagic should be doing non-random (sequential) writes.

The results only make sense to me if you messed up disabling sync on the dataset when you tested it.

I’d suggest testing with a fake slog to see if your results improve.

And if they do, then you have confirmed the cause and know how to fix.

If not, well, we're still where we were.
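Rough sketch of the fake-SLOG test on CORE, using a RAM-backed md(4) device (the 8 GiB size and the md0 unit number are just examples, and this is strictly a short-lived experiment: a log device that vanishes can lose in-flight sync writes):

mdconfig -a -t swap -s 8g         # creates a RAM-backed device and prints its name, e.g. md0
zpool add Aphrodite log /dev/md0  # attach it as a log vdev, then rerun the SMB write test
zpool remove Aphrodite md0        # detach the log vdev when done
mdconfig -d -u 0                  # destroy the md device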
 

Zeff

Dabbler
Joined
Mar 13, 2020
Messages
19
@Stux unfortunately I do not know how to test with a fake SLOG—can you point me in the right direction please?

@asap2go, I will take the system down later and get precise model numbers.
 