Slow write speed

custom130

Cadet
Joined
May 2, 2023
Messages
4
Just built my system a few days ago.

When I transfer a 10 GByte file to my NAS I only get speeds of around 300 MiB/s (see the dashboard numbers below).

Yes, I have read several of the very common "slow write speed" threads here; however, my case seems to be somewhat unique. In most of those cases the explanation was that data is written to the ZFS cache for the first few seconds, and once that fills up the speed drops. That can't be what's happening here, because the files I am transferring are about 0.2% of the memory on my system, and I have turned sync off!

System specs:

CPU: 2x Intel Xeon E5-2699 v4
RAM: 512 GByte ECC
Pool: 2 VDEVs in RAIDZ2, each pool has 6 18 TB drives.

I have a 10 Gbit/s network card on my NAS and a 10 Gbit/s NIC in my PC (running Ubuntu) and a 10 Gbit/s switch.

iperf3 gives the expected results for a 10 Gbit/s link, about 8.75 Gbit/s of transfer speed:
Connecting to host 192.168.1.86, port 5201
[ 5] local 192.168.1.88 port 51198 connected to 192.168.1.86 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.00 GBytes 8.61 Gbits/sec 0 1.68 MBytes
[ 5] 1.00-2.00 sec 996 MBytes 8.36 Gbits/sec 0 1.68 MBytes
[ 5] 2.00-3.00 sec 1.01 GBytes 8.68 Gbits/sec 0 1.68 MBytes
[ 5] 3.00-4.00 sec 1.01 GBytes 8.66 Gbits/sec 0 1.68 MBytes
[ 5] 4.00-5.00 sec 1.04 GBytes 8.94 Gbits/sec 0 1.68 MBytes
[ 5] 5.00-6.00 sec 1.03 GBytes 8.84 Gbits/sec 0 1.68 MBytes
[ 5] 6.00-7.00 sec 1.03 GBytes 8.83 Gbits/sec 0 1.68 MBytes
[ 5] 7.00-8.00 sec 1.03 GBytes 8.88 Gbits/sec 0 1.68 MBytes
[ 5] 8.00-9.00 sec 1.03 GBytes 8.84 Gbits/sec 0 1.68 MBytes
[ 5] 9.00-10.00 sec 1.03 GBytes 8.86 Gbits/sec 0 1.68 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 10.2 GBytes 8.75 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 10.2 GBytes 8.75 Gbits/sec receiver
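
If the network checks out like this, a quick local write test on the NAS itself helps separate pool speed from network speed. A minimal sketch, assuming fio is installed and the dataset is mounted at /mnt/tank/test (substitute your own path):

# Sequential 10 GiB write straight into the dataset. fio's default buffer fill is
# not trivially compressible, so zstd shouldn't inflate the result the way a
# /dev/zero-based dd test would.
fio --name=seqwrite --directory=/mnt/tank/test --rw=write --bs=1M --size=10G \
    --ioengine=posixaio --iodepth=4 --numjobs=1 --end_fsync=1

If that lands well above ~300 MiB/s, the pool isn't the bottleneck and the problem is somewhere in the transfer path (protocol, client, or network).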


CPU and memory utilization on the dashboard are minimal.

While the transfer is going, CPU utilization ranges between 1% and 10%
Network transfer in: 305.77 MiB/s
Memory usage:
Free: 474.0 GiB
ZFS Cache: 12.4 GiB
Services: 25.4 GiB


I am copying from an Ubuntu system with a 10 Gbit/s NIC, reading off a Samsung 970 EVO SSD.
The TrueNAS pool has sync turned off, zstd-10 compression (no dedup), and AES-256 encryption.
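
For reference, those dataset settings can be double-checked from a shell on the NAS. A minimal sketch, with tank/share standing in for the actual pool/dataset name:

# Confirm what the dataset is actually configured to do:
zfs get sync,compression,recordsize,encryption,dedup tank/share

# And make sure the pool layout and health are what you expect:
zpool status tank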

What seems to be the issue here?
 

custom130

Cadet
Joined
May 2, 2023
Messages
4
Trying a 200 GByte file now; here are the stats for that transfer:

[screenshots of the dashboard stats for the 200 GByte transfer]
 

PiWa

Cadet
Joined
May 1, 2023
Messages
6
This is normal. I assume you are using sync=always/standard. You are maxing out the write speed of the drives.
If you set sync=disabled (dangerous, since writes are acknowledged before they reach stable storage), you will max out the uplink.
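
For anyone following along, this is how the sync property is checked and changed from a shell (the dataset name is a placeholder). With sync=disabled, anything still in flight is lost on a crash or power failure:

# Check the current setting:
zfs get sync tank/share

# Async-only writes: faster, but in-flight data is lost on power failure:
zfs set sync=disabled tank/share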
 

custom130

Cadet
Joined
May 2, 2023
Messages
4
This is normal. I assume you are using sync=always/standard. You are maxing out the write speed of the drives.
If you set sync=disabled (dangerous, since writes are acknowledged before they reach stable storage), you will max out the uplink.
Ah I forgot to mention I turned off sync too
 

custom130

Cadet
Joined
May 2, 2023
Messages
4
This is normal. I assume you are using sync=always/standard. You are maxing out the write speed of the drives.
If you set sync=disabled (dangerous, since writes are acknowledged before they reach stable storage), you will max out the uplink.
Wait, no, I did mention that I have sync turned off:

"am transferring are about 0.2% of the memory on my system, and I have turned sync off!"
 

MikeyG

Patron
Joined
Dec 8, 2017
Messages
442
I would expect it to be faster too.

What NIC are you using on both sides?

Have you checked drive usage while the transfer is going? What record size is the dataset in question set to? I would agree that it should fill up the cache in RAM first, but it's worth checking the disks just in case that's where it's hitting a limit (see the sketch at the end of this post).

You said "each pool has 6 18TB." I assume you mean "each vdev has 6 18TB disks" so you have 12 18TB disks total in 2 RAIDZ2 vdevs?

I assume this is via SMB?

Have you experimented with setting MTU to 9000?

Are you able to try this from a workstation running Windows, just in case it's a Linux thing?

I'm not familiar with zstd10 compression as I've always just used lz4. Have you tried with compression off?

What are read speeds like across the network from the pool?

Have you tried directly connecting the workstation to the NAS to bypass the switch just in case?
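
A rough sketch of how the drive-usage, record-size, and MTU checks above could be run, assuming a pool named tank, a dataset tank/share, and the NAS at 192.168.1.86 (pool and dataset names are placeholders):

# On the NAS: per-vdev/per-disk throughput while the copy runs.
# If individual disks sit near their sequential write limit, the pool is the bottleneck.
zpool iostat -v tank 1

# Record size of the dataset (128K is the ZFS default):
zfs get recordsize tank/share

# From the Ubuntu client: verify jumbo frames actually pass end to end.
# 8972 = 9000-byte MTU minus 28 bytes of IP + ICMP headers; -M do forbids fragmentation.
ping -M do -s 8972 192.168.1.86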
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Just to be clear on what's happening...

ZFS won't just keep writing everything you throw at it to RAM until RAM fills up, expecting to flush it to disk later.

What's "in the air" (and subject to loss of power in the case of async writes) at any given time is 2 transaction groups... about 10 seconds of data.

If your pool disks can't keep up after that, then the system backs off until they can catch up.
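
Concretely, the amount of dirty data ZFS will hold in flight and the transaction-group interval are visible as module parameters. A hedged look at them, assuming TrueNAS SCALE (Linux OpenZFS; on CORE the equivalents live under the vfs.zfs sysctl tree):

# Maximum dirty (not-yet-committed) data ZFS will buffer before throttling writers,
# by default a fraction of RAM up to a capped maximum:
cat /sys/module/zfs/parameters/zfs_dirty_data_max

# Seconds between forced transaction group commits (default 5, hence roughly 10 s of data in flight):
cat /sys/module/zfs/parameters/zfs_txg_timeout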
 