SSH/SCP file transfer speed limit?

Status: Not open for further replies.

Visseroth (Guru) | Joined: Nov 4, 2011 | Messages: 546
So I have recently been working on a friend's server and have been transferring a large number of files back and forth. To speed this up I connected his server to my switch via a 10Gb module, since mine is already connected at 10Gb.
While transferring the files I noticed that each scp transfer usually peaked at about 118MB/s to 138MB/s, so in an effort to get through most of the folders faster I started additional transfers, about 5 in total, and each peaked at about the same rate.
So it seems I am unable to transfer faster than about 1.2Gb/s per thread.
It is my understanding that there is a newer scp that allows multi-threading, but I don't know if this is that version.

My question is: what is the limiting factor, why am I stuck at about 1.2Gb/s per transfer, and is there anything I can do to correct it?
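
In case it helps anyone following along, one simple way to run several scp sessions at once is to background each one from a shell; the dataset paths and address below are only placeholders:

  # start several recursive copies in parallel, one per dataset
  scp -r /mnt/tank/dataset1 user@192.168.1.50:/mnt/tank/ &
  scp -r /mnt/tank/dataset2 user@192.168.1.50:/mnt/tank/ &
  scp -r /mnt/tank/dataset3 user@192.168.1.50:/mnt/tank/ &
  # wait for all background transfers to finish
  wait

Each backgrounded copy gets its own ssh process, so each one can run on its own CPU core.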
 

depasseg (FreeNAS Replicant) | Joined: Sep 16, 2014 | Messages: 2,874
Since SSH/SCP is encrypting everything, you might be running into a CPU core speed limit. I'm assuming an iperf test confirms you get full 10Gb/s throughput and your pools can sustain something close to that level of speed.
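
A quick way to check for that while a copy is running is to watch the process doing the crypto on each box; this is just stock top on a FreeBSD-based system, nothing FreeNAS-specific:

  # run while a transfer is in flight
  top
  # look at the CPU column (WCPU on FreeBSD) for the ssh process on the
  # sending side and the sshd process on the receiving side; a value
  # pinned near 100% means that session is limited by a single core

If one process sits at ~100% while the overall CPU graph still looks mostly idle, that's the classic single-core bottleneck.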
 

Visseroth (Guru) | Joined: Nov 4, 2011 | Messages: 546
Yeah, that was my thought too, but I guess there is another version of scp that allows for multi-threading. The funny thing was that the CPU graphs only showed 60% utilization with 5 sessions running at once on a 6-core CPU with HT; the other machine has 8 cores and showed about 40% utilization. Two sessions were transferring at about 1Gb/s each, and the others were transferring smaller files, so the speeds were all over the place, but the graphs on each machine showed about 3Gb/s total.
So, yeah, in short I also thought it was a CPU limitation.
And I have run iperf before and it gave me funny results. I ran it on a machine once and it said the machine performed like crap, but put it in the real world and it ran great. So I don't really trust iperf, and it's been a while, so I've forgotten the command line to run it, lol.
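
In case anyone else needs it, the basic invocation is just a server on one box and a client on the other; the address is a placeholder, and depending on what's installed the binary may be iperf or iperf3:

  # on the box receiving the test traffic
  iperf -s
  # on the sending box: 4 parallel streams for 30 seconds
  iperf -c 192.168.1.50 -P 4 -t 30

Running with a few parallel streams (-P) is the usual way to see whether the wire itself can do the full 10Gb/s independent of any single-stream limit.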
 

Stux (MVP) | Joined: Jun 2, 2016 | Messages: 4,419
Each SSH session was probably using 100% of a core.

5 threads at 100% each, out of 12 logical cores, works out to roughly 40% usage (5/12 ≈ 42%); add about 20% of other load and you land right around the 60% you saw.
 

m0nkey_ (MVP) | Joined: Oct 27, 2015 | Messages: 2,739
Visseroth said:
So I have recently been working on a friend's server and have been transferring a large number of files back and forth. To speed this up I connected his server to my switch via a 10Gb module, since mine is already connected at 10Gb.
While transferring the files I noticed that each scp transfer usually peaked at about 118MB/s to 138MB/s, so in an effort to get through most of the folders faster I started additional transfers, about 5 in total, and each peaked at about the same rate.
So it seems I am unable to transfer faster than about 1.2Gb/s per thread.
It is my understanding that there is a newer scp that allows multi-threading, but I don't know if this is that version.

My question is: what is the limiting factor, why am I stuck at about 1.2Gb/s per transfer, and is there anything I can do to correct it?
Daft question, but are you using WinSCP or anything slightly related to PuTTY? I've found in most scenarios that using anything based on PuTTY will result in slow transfers. If you're using OpenSSH tools and you're still getting low-ish speeds, then yes, you're likely CPU bound.
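
If you're not sure what you're actually running, a quick sanity check is below; the file name and address are just placeholders:

  # confirm the client is OpenSSH and note its version
  ssh -V
  # copy something in verbose mode; the debug output includes the
  # cipher that was negotiated for the session
  scp -v somefile user@192.168.1.50:/tmp/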
 

Spearfoot (He of the long foot, Moderator) | Joined: May 13, 2015 | Messages: 2,478
@Visseroth, if you're CPU bound, you might try changing the default SSH encryption scheme to something faster. I use arcfour encryption to speed up rsync transfers.
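
If anyone wants to try it, the cipher can be picked on the command line. One caveat: arcfour has been removed from recent OpenSSH releases, so on newer systems choose something from the output of ssh -Q cipher instead. The host and paths below are just placeholders:

  # list the ciphers your OpenSSH build supports
  ssh -Q cipher
  # scp with an explicitly chosen cipher
  scp -c aes128-ctr bigfile user@192.168.1.50:/mnt/tank/
  # rsync over ssh, forcing the cipher (arcfour only on older OpenSSH builds)
  rsync -av -e "ssh -c arcfour" /mnt/tank/dataset1/ user@192.168.1.50:/mnt/tank/dataset1/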
 

Visseroth (Guru) | Joined: Nov 4, 2011 | Messages: 546
I didn't think of that; I'll have to try it next time I'm doing file transfers.
 

SweetAndLow (Sweet'NASty) | Joined: Nov 6, 2013 | Messages: 6,421
Visseroth said:
Yeah, that was my thought too, but I guess there is another version of scp that allows for multi-threading. The funny thing was that the CPU graphs only showed 60% utilization with 5 sessions running at once on a 6-core CPU with HT; the other machine has 8 cores and showed about 40% utilization. Two sessions were transferring at about 1Gb/s each, and the others were transferring smaller files, so the speeds were all over the place, but the graphs on each machine showed about 3Gb/s total.
So, yeah, in short I also thought it was a CPU limitation.
And I have run iperf before and it gave me funny results. I ran it on a machine once and it said the machine performed like crap, but put it in the real world and it ran great. So I don't really trust iperf, and it's been a while, so I've forgotten the command line to run it, lol.
Test with iperf; it does not give funny results. Something else caused your problems in that previous environment.

 