steven6282
Dabbler
Joined: Jul 22, 2014
Messages: 39
Hello everyone,
I recently built a new FreeNAS server. I needed something with more space, so I upgraded to an 8 TB x 6 RAID-Z2 system. I've finally finished all my disk testing and burn-in procedures, and tonight I started copying data over from my old server. The thing I can't figure out is why rsync is running so slowly.
Both servers have a 4x 1 Gb LACP connection to a managed switch that is fully configured for LACP. I understand that any single connection in this setup can at most saturate one NIC, which is fine, but I'm not even getting that. My speeds on larger files max out around 53 - 56 MB/s. I've tested and verified that copying a file from the old server to my Windows desktop runs at 110 - 114 MB/s, and copying from my desktop to the new NAS also runs at 110 - 114 MB/s. That's saturation speed for a single gigabit NIC. So why would running rsync on the new server, pulling from the old one, be half that speed?
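For what it's worth, raw network throughput between the two boxes can be sanity-checked with iperf, independent of the disks and ssh (assuming iperf3 is available on both servers; the hostname below is a placeholder):

  # on the old server: start a listener
  iperf3 -s

  # on the new server: single TCP stream to the old server
  iperf3 -c old-server

  # same test with 4 parallel streams
  iperf3 -c old-server -P 4

With LACP between a single pair of hosts, these streams will usually all hash onto the same physical link, so roughly 940 Mb/s is the number to expect.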
It doesn't seem to be a read or write speed limitation as far as I can tell. I used dd to test write speeds on the new NAS with files much larger than the ones I'm copying and achieved speeds well above what I'm seeing during transfer.
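For reference, the kind of sequential write test I mean looks roughly like this (the pool path is just an example, and since ZFS compression shrinks /dev/zero data to almost nothing, the reported speed can be optimistic):

  # write a 16 GiB test file to the new pool (example path)
  dd if=/dev/zero of=/mnt/tank/ddtest.bin bs=1M count=16384

  # remove the test file afterwards
  rm /mnt/tank/ddtest.bin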
I saw similarly low speeds using scp instead of rsync to copy the files, which makes me believe it's a network configuration issue, but I'm not sure what it would be.
Thanks
EDIT: Right after posting this, I spotted a potential culprit for the speed limitation. Looking at the process list on the old server, I see sshd maxed out at 100% CPU. I guess sshd is a single-threaded process? That would explain the bottleneck with both rsync and scp, and why it doesn't happen when copying to Windows, since that goes over Samba instead of SSH. On the new server, sshd is running at about 12% and rsync around 30%.
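If sshd really is the limit, one thing I may try first is having rsync run ssh with a lighter cipher and no compression, so the single sshd process has less work to do per byte (cipher availability depends on the OpenSSH version; the paths below are placeholders):

  # pull from the old server over ssh with a cheaper cipher
  rsync -av -e "ssh -c aes128-ctr -o Compression=no" \
      old-server:/mnt/oldtank/data/ /mnt/tank/data/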
If that is the case, is there another method anyone could recommend for copying the data over? At this rate it's going to take over 100 hours, so anything that cuts that down would be nice lol.
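One alternative I've seen mentioned for moving whole datasets between FreeNAS boxes is a ZFS snapshot streamed over plain netcat, which skips ssh entirely. This is only a sketch, the pool/dataset names and port are hypothetical, and the stream is unencrypted, so it's only sensible on a trusted LAN:

  # old server: snapshot the dataset to migrate
  zfs snapshot oldtank/data@migrate

  # new server: listen on a port and receive into the new pool
  nc -l 8023 | zfs receive tank/data

  # old server: stream the snapshot to the new server
  zfs send oldtank/data@migrate | nc new-server 8023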