Rsync from one TrueNAS to another is very slow

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Hi,

I have two TrueNAS SCALE builds, one running 22.12.3.3 and the other running 23.10.2. I want to copy all data from the 22.12.3.3 build to the 23.10.2 build. Both servers are directly connected with a 10 Gb FibreChannel connection (Intel X710-DA2 on both sides). On both sides an alias/IP address is configured directly on that NIC.

I have configured an SSH connection with an SSH keypair on the sender, using the IP address configured on the (destination) 10 Gb NIC. I then configured an rsync task using that SSH connection, which PUSHES the data to the destination TrueNAS.
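As I understand it, the GUI task boils down to a manual push roughly like this (the paths below are placeholders, not my actual datasets; "Compress" in the task screen adds the -z):

Code:
# roughly what the GUI task runs: push over SSH in archive mode
rsync -avz -e "ssh -i /root/.ssh/id_rsa" \
  /mnt/tank/mydata/ root@192.168.11.11:/mnt/tank/mydata/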

On the sending system I can see that the 10 Gb NIC is sending at 233 Mb/s average and 407 Mb/s max:
sender.png


On the receiving system the 10 Gb NIC shows that it is receiving at 242 Mb/s average and 429 Mb/s max:
receiver.png


I have transferred a dataset of 2 TiB. Assuming a transfer rate of 230 Mb/s, I would have expected this to take (2*1000*1000)/230/60/60 ≈ 2.4 h, i.e. approximately 3 hours. But it actually took nearly 10 hours.

The disks in the destination TrueNAS are Seagate Exos X20 20TB, which should be fast enough to handle those 230 Mb/s.

Am I doing something wrong? Is there anything I can do to improve the speed? Otherwise transferring the bigger datasets will take weeks :(

Best regards,
AMiGAmann
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
The mean is calculated over the window displayed in the reporting section. Since the transfer did not last the whole reporting period, the mean comes out lower. You need to look at the graph, which indicates you transferred at around 400 Mb/s.
What is your hardware setup? Especially what is the pool storage layout?

You are confusing the units: 400 Mb/s is megabits, not megabytes. It translates to roughly 50 megabytes per second, which then takes on the order of 10 hours for your dataset.
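Spelled out, using the ~400 Mb/s peak from your graph:

Code:
400 Mb/s ÷ 8 bits/byte = 50 MB/s
2 TiB ≈ 2,200,000 MB
2,200,000 MB ÷ 50 MB/s ≈ 44,000 s ≈ 12 hours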

Can you show your network configuration in addition to the hardware?
Have you tested your connection with iperf?

Edit: also add your rsync task configuration. One of my machines serves as an rsync target for a friend, and no SSH key setup was required. Or are you using a replication task?
 
Last edited:

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Hi,

thanks for your reply.

Okay, so I transferred at 50 MB/s, which is really slow.

The hardware setup on both sides: ASRockRack X570 D4U, Ryzen 9 5900X, 128 GB RAM, and a pool with a single data vdev, an 8-wide RAIDZ1.

Network configuration on receiver side:
netw1.png


Network configuration on sender side:
netw2.png


I want to send from 192.168.11.12 to 192.168.11.11. There is a direct physical connection between those interfaces.

I did not test the connection explicitly with iperf. In the past I had a direct physical connection between a client and the old TrueNAS build, and I was able to transfer data at >400 MB/s between client and server.

This is how I configured the rsync task on the source TrueNAS:
rsync1.png

rsync2.png


The task was manually started.

And this is how the ssh connection is configured:
ssh.png


Best regards,
AMiGAmann
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Mmmh, I don't use this rsync interface myself, but so far nothing stuck out to me.

Please test with iperf. An 8-wide RAIDZ1 is too wide for my personal taste, I'd prefer RAIDZ2 at that width. But other than that you should at least reach single-drive write speed, if not more.

What does the disk I/O reporting show during the transfer?
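If it helps, a minimal run looks like this (IPs taken from your posts; as far as I know iperf3 ships with SCALE):

Code:
# on the receiving box (192.168.11.11):
iperf3 -s

# on the sending box (192.168.11.12), test for 30 seconds:
iperf3 -c 192.168.11.11 -t 30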
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Yeah, I know an 8-wide RAIDZ1 is not optimal, but in normal operation performance is not a problem at all. A tested offline spare drive is always available, and the most important data is always backed up.

Iperf looks good to me.

Server side on the receiving system:
iperfs.png


Client side on the sending system:
iperfc.png


The disk I/O on the sending system (8x Seagate Exos X18 18TB) shows very low reads of 6.0-6.4 MiB/s (displayed as megabytes/s in SCALE 22.12.3.3), and the receiving system (8x Seagate Exos X20 20TB) shows writes of 21.75 MiB/s (displayed as mebibytes/s in SCALE 23.10.2, if I understand it correctly).

Very strange...

Best regards,
AMiGAmann
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
What are your bridge members?

I can't make sense of the 22 MiB/s writes on the receiving end.
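You could cross-check those numbers outside the GUI with zpool iostat (the pool name below is a placeholder):

Code:
# per-vdev read/write bandwidth, refreshed every 5 seconds
zpool iostat -v tank 5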

Can you do a replication with zfs send, or set up SMB shares and see what the speeds are that way?
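As a sketch, a one-off replication from the shell would look something like this (dataset and snapshot names are placeholders; the GUI replication task does the same thing with more safety rails):

Code:
# snapshot the source dataset, then stream it to the other box
zfs snapshot tank/mydata@migrate1
zfs send tank/mydata@migrate1 | ssh root@192.168.11.11 zfs recv tank/mydata-copy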

Try without compress checked in the rsync screen. If I'm not mistaken that checkbox maps to rsync's -z flag, and rsync's compression runs single-threaded, so on a fast link it can become the CPU bottleneck.
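From the shell the difference is just that flag (paths are placeholders, as in the sketch above):

Code:
# same transfer without compression: -z dropped from -avz
rsync -av -e ssh /mnt/tank/mydata/ root@192.168.11.11:/mnt/tank/mydata/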

With my limited knowledge I didn't find an obvious flaw, so I'm at my wit's end here, unfortunately.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Both servers are directly connected with a 10 Gb FibreChannel connection (Intel X710-DA2 on both sides).
No. You have Ethernet with fiber optics. That is entirely different from Fibre Channel, which is only for storage.
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
What are your bridge members?
The only bridge members are the VLANs (vlan10 in br10, vlan30 in br30).

Try without compress checked in the rsync screen.
That was it! I did not think about it and just left it ticked (the default). Without compression the data is now transferring at roughly 650 MB/s, if I am calculating correctly: the destination dataset shows 733.15 GiB used after 20 minutes, and 733.15 GiB × 1.0737 ≈ 787 GB, so 787,000 MB / (20 × 60 s) ≈ 656 MB/s.

Thank you for helping me!

No. You have Ethernet with fiber optics. That is entirely different from Fibre Channel, which is only for storage.
Thanks for clarifying.
 