Slow Rsync of data from old server to new


steven6282

Hello everyone,
I recently built a new FreeNAS server. I needed something with more space, so I upgraded to an 8 TB x 6 RAID-Z2 system. I've finally finished all my disk testing and burn-in procedures, and tonight I started copying data over from my old server. The thing I can't figure out is why rsync is running so slowly.

Both servers have a 4x 1 Gb LACP connection through a managed switch fully configured for LACP. I understand that any single connection in this setup will still at most saturate one NIC, which is fine, but I'm not even getting that. My speed on larger files maxes out around 53 - 56 MB/s. I've tested and verified that copying a file from the old server to my Windows desktop runs at 110 - 114 MB/s, and copying from my desktop to the new NAS again runs at 110 - 114 MB/s. That is saturation speed for a single gigabit NIC. So why would running rsync on the new server, pulling from the old one, be half that speed?

It doesn't seem to be a read or write speed limitation as far as I can tell. I used dd to test write speeds on the new NAS with files much larger than the ones I'm copying and got speeds well above what I'm seeing during the transfer.
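Roughly this kind of test, for what it's worth (the dataset path and size are just examples, and compression needs to be off on the target dataset or the numbers mean nothing):

# crude sequential write test against the new pool (example path and size)
dd if=/dev/zero of=/mnt/tank/ddtest.bin bs=1m count=32768
# crude sequential read test of the same file afterwards
dd if=/mnt/tank/ddtest.bin of=/dev/null bs=1m
rm /mnt/tank/ddtest.bin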

I saw similarly low speeds using scp instead of rsync to copy the files, which makes me believe it's a network configuration issue, but I'm not sure what it would be.

Thanks

EDIT: Right after posting this I spotted a potential culprit for the speed limitation. Looking at the process list on the old server, I see sshd maxed out at 100%. I guess sshd is a single-threaded process? That would explain the bottleneck with both rsync and scp, and it wouldn't happen going to Windows since that uses Samba instead of SSH. On the new server, sshd is running at 12% and rsync at around 30%.

If that is the case, is there another method anyone could recommend for copying the data? At this rate it's going to take over 100 hours. If I can reduce that a bit, that would be nice lol.
 

Chris Moore

Last time I did what you are doing, I connected all the drives to the same system and did a ZFS send | receive to copy the data: 9 TB in 2 hours.
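Something along these lines once both pools are imported in the one box (the pool and snapshot names here are just placeholders):

# snapshot everything on the old pool, then send it recursively to the new one
zfs snapshot -r oldtank@migrate
zfs send -R oldtank@migrate | zfs receive -Fdu tank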


 

steven6282

Unfortunately that isn't really an option for me here. For one, I need the old NAS to remain operational until I switch over, because I have stuff running 24x7 that accesses data on it. It isn't critical, but it would be nice. For two, it would require purchasing a SATA card just to do the data transfer, as I don't have enough SATA ports on the motherboard to support another 6 drives from the old NAS.

I tried a couple of other things so far today. I did a mount_smbfs to see if I could avoid copying over SSH. Using pv to get a progress readout, it did copy faster, but still maxed out around 60 - 64 MB/s, and nothing on either server went over 20% CPU usage. Also, using rsync from that Samba mount to the local storage on the new server was only 45 - 48 MB/s. So I have no idea what is causing the slowdown in this situation, when Samba transfers to Windows are always 110 MB/s+. Maybe the FreeBSD smbfs implementation is just slower, I don't know.
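For reference, roughly what I did (the host, share name, and paths are just examples):

mkdir -p /mnt/oldnas
mount_smbfs -I 192.168.1.10 //user@oldnas/data /mnt/oldnas
# single large file with a progress readout
pv /mnt/oldnas/somebigfile.mkv > /mnt/tank/data/somebigfile.mkv
# or pull a whole tree from the SMB mount to local storage
rsync -a --progress /mnt/oldnas/ /mnt/tank/data/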
 

Chris Moore

How much data are you moving?
I am doing a transfer at work from one system to another where I have 161 TB of compressed data that would expand out to about 280 TB. The current projection is that the copy will take another eight days, and it has already been running for four days. I connected all 124 drives to the same system to make the copy go faster because it was going to take 28 to 30 days doing it across the network.
Depending on how quickly you want this done, you might be much better off purchasing a SAS HBA for your FreeNAS system so you can connect all the drives to the newer system. Once you are done with the copy, you can keep the new drives on the SAS controller and have the SATA ports on the system board available for later.

This is the kind of SAS HBA that I use:
https://www.ebay.com/itm/253822062505
and you will need a set of cables to go with it:
https://www.ebay.com/itm/371681252206
 

steven6282

How much data are you moving?

I'm only transferring about 14 TB. It isn't so much that I can't let it take a few days, just that I was looking for better ways to do it hehe :) I feel like there should be some method out there that can fully saturate my LACP interface, but I don't see anything readily available. I did find two methods that mostly saturate a single NIC just a few minutes ago. If I configure an rsync module on the old server and force it to use the rsync protocol, that copies at 100 - 105 MB/s. Plain old FTP is around 105 - 108 MB/s.
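For anyone curious, this is roughly the setup (the module name, path, and hostname are just examples; the module can also be defined through the FreeNAS rsync service settings instead of editing the file by hand):

# on the old server, a minimal read-only module in rsyncd.conf,
# with the rsync daemon running (rsync --daemon, or the rsync service)
[data]
    path = /mnt/oldtank/data
    read only = true
    uid = root     # so the daemon can read everything; adjust to taste
    gid = wheel

# on the new server, pull using the rsync protocol instead of SSH
rsync -a --progress rsync://oldnas/data/ /mnt/tank/data/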

I tried running rsync in 2 different screens to see if they would open separate connections and max out more than one NIC, but they didn't; the load just got split across a single NIC. I only have a limited understanding of how LACP works, but I gather it hashes each connection onto one link, and since both transfers are between the same pair of hosts they both landed on the same one.
 

Chris Moore

I feel like there should be some method out there that can fully saturate my LACP interface.
LACP is great for serving many clients, but it just doesn't work the way we might like when trying to transfer between two NAS systems. I tried that the time before last when I did a NAS upgrade at home. The build I have now is my eighth, so I have tried almost all the options, even over a 10Gb network, and the fastest thing is connecting the two pools in the same system. With the system I have now, I can transfer from the 12-drive NAS1 pool to the 12-drive NAS2 pool at 1000 MB/s; I put up a photo once. It only goes that fast because there are two vdevs of six drives each. If I transfer from the NAS1 pool to the four-drive Backup pool, it runs at half the speed because there is only one vdev. More vdevs give you more speed.
 

steven6282

Yeah, I think I found the best solution I'm going to get for an across-the-network transfer. I turned off LACP on both servers and specified the address in each rsync command to make it use a particular NIC. This way I can have four rsync commands running simultaneously, saturating four NICs. Although I'm only using three; I left one NIC on the old server with its original IP address so that stuff can still reach it on a single NIC while I do the copy. It does require a more manual process, since I have to break up the copy myself, but doing it this way I should have the 14 TB transferred in less than 24 hours, which is faster than I could get a card here and do it in the same system hehe :)
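Roughly what the parallel pulls look like (the addresses, module name, and directories are made up; each command points at a different IP on the old server and binds to a matching local address with --address):

rsync -a --address=192.168.1.21 rsync://192.168.1.11/data/movies/  /mnt/tank/data/movies/  &
rsync -a --address=192.168.1.22 rsync://192.168.1.12/data/tv/      /mnt/tank/data/tv/      &
rsync -a --address=192.168.1.23 rsync://192.168.1.13/data/backups/ /mnt/tank/data/backups/ &
wait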

The only real pressure to get the transfer done soon is that I'm in SC and there is a hurricane coming through in a few days. I'd like to have the transfer finished before then, just in case I lose power or something hehe.

I suppose if I really wanted to, I could write a script to get a directory list and automatically round-robin the rsync commands across the IP addresses so that I don't have to split up the copy manually hehe.
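Something like this rough sketch is what I have in mind (the addresses, module name, and destination path are placeholders, and it assumes no spaces in the top-level directory names):

#!/bin/sh
# deal the top-level directories of the "data" module out across
# three addresses on the old server, three pulls in flight at a time
SRC=data
DST=/mnt/tank/data
i=0
# ask the rsync daemon for the top-level directory names
for name in $(rsync rsync://192.168.1.11/$SRC/ | awk '/^d/ && $NF != "." {print $NF}'); do
    case $((i % 3)) in
        0) addr=192.168.1.11 ;;
        1) addr=192.168.1.12 ;;
        2) addr=192.168.1.13 ;;
    esac
    rsync -a "rsync://$addr/$SRC/$name/" "$DST/$name/" &
    i=$((i + 1))
    [ $((i % 3)) -eq 0 ] && wait   # wait for each batch of three to finish
done
wait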
 