OpenVPN on TrueNAS Scale slow?

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
Hello good people!

I recently set up an OpenVPN configuration in TrueNAS SCALE to access my pools from my other location, but transferring files through the VPN is painfully slow (~3.3MB/s). I am using the UDP protocol, SHA256 authentication, the AES-256 cipher, and LZ4 compression. I've tried several different combinations of all of these settings, as well as different ports, and it still caps out at ~3.3MB/s. The internet speeds at both locations are gigabit, up and down. I also don't believe I'm hardware limited, as CPU usage barely changes when transferring files, and about half of my 40 threads sit idle most of the time.

Before I set up the VPN, I was using Filezilla with SFTP, which allowed me to do 10 transfers simultaneously. However, each transfer was limited to the same ~3.3MB/s.

I don't know enough about this to begin to know where to look to solve this issue, so any and all help is greatly appreciated!
I'm starting to think my ISP is doing some funny business...
 

Attachments

  • 3.3.png (2.3 KB)
  • 3.3fz.png (3.8 KB)

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
How are you transferring the files?

If the file transfer method doesn't keep the queues full with many outstanding reads/writes, then there will be a latency based limit on throughput.
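morganL's latency point can be put in numbers: a single stream can never move more data per second than it keeps in flight divided by the round-trip time. A back-of-the-envelope sketch (the 64 KiB window and 17 ms RTT here are purely illustrative figures, not measurements from this setup):

```shell
# Single-stream throughput ceiling = bytes in flight / round-trip time.
# Illustrative: an unscaled 64 KiB TCP window over a 17 ms round trip.
awk 'BEGIN { window = 64 * 1024; rtt = 0.017; printf "%.1f MB/s\n", window / rtt / 1e6 }'
# prints "3.9 MB/s"
```

If the transfer tool only keeps one small window's worth of data outstanding, a number in this ballpark is the hard cap no matter how fat the pipe is.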
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What do speed test websites say?

I notice your profile lists Nebraska. Are you on a wireless ISP? Even in small towns not on wireless, bandwidth is sometimes constrained. Unfortunately, the 10Gbps of Internet that I can get for $800 in major metro areas is a lot more expensive when hauled over an OC-192 from Denver to North Platte or Kearney, and then hauled on smaller circuits to your local town. I don't know the actual architecture out there, I just know the UP rail line, along which fiber lies.


So what often happens is that some small rural ISP discovers that they can't provide "unlimited" service without going bankrupt. I mention this specifically because the number YOU gave, which works out to about 26 Mbps, happens to be just north of Netflix's 25Mbps 4K stream.


It wouldn't shock me if 10Gbps of delivered bandwidth out in the sticks was $30K/month. So the question is what the oversubscription rate is like. If you have multiple tiers of service, like a 10Mbps "basic" and 100Mbps "extreme" service, the problem is that the people buying the "extreme" service nevertheless tend to be the heavy bandwidth users, so if 25% of them are chewing up 50Mbps each (two 4K Netflix streams) during prime time, that means you can only get about 800 subscribers on that 10Gbps circuit, and the bandwidth alone is costing you $38/subscriber.
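Spelling out the arithmetic in that hypothetical (all figures are jgreco's illustrative ones, not real Allo numbers):

```shell
# 25% of subscribers pulling 50 Mbps each averages out to 12.5 Mbps per subscriber.
awk 'BEGIN {
  circuit = 10000            # 10 Gbps circuit, in Mbps
  avg = 0.25 * 50            # average Mbps per subscriber during prime time
  subs = circuit / avg
  printf "%d subscribers, $%.2f/subscriber\n", subs, 30000 / subs
}'
# prints "800 subscribers, $37.50/subscriber"
```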

Now, I don't know what your ISP's actual situation is, but it may just be a practical issue resulting from the need to overcommit to sell a viable service. I come from the service provider industry, and we had to work out all these factors in the early days.
 

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
How are you transferring the files?

If the file transfer method doesn't keep the queues full with many outstanding reads/writes, then there will be a latency based limit on throughput.
I'm using an SMB share to connect to my Windows and Mac machines, each using their respective file explorers. Occasionally I will use Filezilla with SFTP on the Windows machine if I have a lot of files to transfer. I usually transfer multi-gigabyte files.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You might want to think about MTU... a VPN is likely reducing your usual 1500 to something more like 1492, and if your client and server are trying to send TCP inside UDP packets that are too big for the tunnel, you'll be doing a lot of fragmenting and re-sending.

Just a thing to look into, maybe nothing.
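If MTU does turn out to be the issue, OpenVPN has directives for it. A hypothetical config fragment for both ends of the tunnel; the 1400-byte values are illustrative and would need tuning for the actual path:

```
# Hypothetical OpenVPN config fragment, if MTU is the problem:
tun-mtu 1500      # MTU of the tun device
fragment 1400     # internally fragment UDP datagrams larger than this
mssfix 1400       # clamp TCP MSS so TCP-in-the-tunnel avoids IP fragmentation
```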
 

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
What do speed test websites say?

I notice your profile lists Nebraska. Are you on a wireless ISP? Even in small towns not on wireless, bandwidth is sometimes constrained. Unfortunately, the 10Gbps of Internet that I can get for $800 in major metro areas is a lot more expensive when hauled over an OC-192 from Denver to North Platte or Kearney, and then hauled on smaller circuits to your local town. I don't know the actual architecture out there, I just know the UP rail line, along which fiber lies.


So what often happens is that some small rural ISP discovers that they can't provide "unlimited" service without going bankrupt. I mention this specifically because the number YOU gave, which works out to about 26 Mbps, happens to be just north of Netflix's 25Mbps 4K stream.


It wouldn't shock me if 10Gbps of delivered bandwidth out in the sticks was $30K/month. So the question is what the oversubscription rate is like. If you have multiple tiers of service, like a 10Mbps "basic" and 100Mbps "extreme" service, the problem is that the people buying the "extreme" service nevertheless tend to be the heavy bandwidth users, so if 25% of them are chewing up 50Mbps each (two 4K Netflix streams) during prime time, that means you can only get about 800 subscribers on that 10Gbps circuit, and the bandwidth alone is costing you $38/subscriber.

Now, I don't know what your ISP's actual situation is, but it may just be a practical issue resulting from the need to overcommit to sell a viable service. I come from the service provider industry, and we had to work out all these factors in the early days.
My ISP is a subsidiary of Nelnet, and hooks most of our town up with fiber. Both locations use the same ISP, and many people in our town and across our state (including both of my locations) have gigabit upload and download. Personally, I've never experienced bandwidth limitations, and our ISP tends to be considered the best state-wide, but then again, I don't know enough about the small details to know for sure.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I think your set-up lacks parallelism to increase bandwidth. You should try to transfer multiple files in parallel or find software that doesn't just read 1 block, write 1 block, ack and then redo. Any single file might be limited by latency to 3.3 MB/s.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't buy @morganL 's idea. Latency in a regional area network isn't likely to be that high, and while increasing buffers could help if you were seeing variable transmission, the fact that you're seeing a flat number is strongly suggestive of a per-flow rate limit. The other options would be that you are getting capped out by CPU performance of SSH or OpenVPN, but that's just not that likely, and it is unlikely to give a super-consistent number.

You have already somewhat proven this by testing with multiple simultaneous SFTP sessions and arriving at the same number. And I think you already suspected it.

Do you have the capability to run some iperf tests?

If you download ISOs from Denver, what sorts of speeds do you get? How about from the Bay Area? You may have to "shop around" a little bit to see if you can find some high-speed download sites. But this can help to dispel the "latency" idea.
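For anyone following along, a sketch of the kind of iperf3 runs being asked for (nas.example.net is a placeholder for the server's address, and iperf3 must be running on both ends):

```shell
# Sketch; substitute real hostnames for the placeholders.
iperf3 -s                               # on the TrueNAS box (server side)
iperf3 -c nas.example.net -t 30         # single stream from the remote client
iperf3 -c nas.example.net -t 30 -P 10   # 10 parallel streams; a per-flow cap
                                        # shows up as ~10x the single-stream rate
iperf3 -c nas.example.net -t 30 -R      # reverse direction, to spot asymmetry
```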
 

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
I think your set-up lacks parallelism to increase bandwidth. You should try to transfer multiple files in parallel or find software that doesn't just read 1 block, write 1 block, ack and then redo. Any single file might be limited by latency to 3.3 MB/s.
That's what I'm beginning to think, which is why I'm considering going back to my Filezilla setup. I just ran an iperf test, and I was consistently getting 68MB/s to the server. Is there any way to speed these up?
 

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
I don't buy @morganL 's idea. Latency in a regional area network isn't likely to be that high, and while increasing buffers could help if you were seeing variable transmission, the fact that you're seeing a flat number is strongly suggestive of a per-flow rate limit. The other options would be that you are getting capped out by CPU performance of SSH or OpenVPN, but that's just not that likely, and it is unlikely to give a super-consistent number.

You have already somewhat proven this by testing with multiple simultaneous SFTP sessions and arriving at the same number. And I think you already suspected it.

Do you have the capability to run some iperf tests?

If you download ISOs from Denver, what sorts of speeds do you get? How about from the Bay Area? You may have to "shop around" a little bit to see if you can find some high-speed download sites. But this can help to dispel the "latency" idea.
I just ran a couple right as you commented! Only getting 3.3MB/s to the server.
 

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
That's what I'm beginning to think, which is why I'm considering going back to my Filezilla setup. I just ran an iperf test, and I was consistently getting 68MB/s to the server. Is there any way to speed these up?
I misread the info it spit out... it said 68MB transferred in total... my bad. Still 3.3MB/s.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
If iperf has the same limit, it's a networking issue... not what I suspected.
It's capping at ~26Mbit/s ... maybe the ISP has a rate limiter?
However, weren't you saying that Filezilla allowed you 3.3MB/s per file... with 10 files?
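For reference, the MB/s to Mbit/s conversion behind those figures:

```shell
# 3.3 megabytes/second expressed in megabits/second:
awk 'BEGIN { printf "%.1f Mbit/s\n", 3.3 * 8 }'
# prints "26.4 Mbit/s"
```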
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Both FreeBSD and Linux are natively tuned for reasonable 1GbE LAN performance, and I'm guessing that you should still be seeing reasonable performance in a regional network, unless it's being mucked with. Can you run a ping test from host to host to see what the RTT is like, and whether there is any jitter?

My other question is whether this is a directional effect of some sort, such as an upstream per-stream limitation, which is why it would be really useful for you to run some performance tests against nearby and more distant archive/mirror sites. If we find out that you can download at 10MBytes/sec from Denver but cross-region traffic is showing as limited, it is reasonable to assume an ingress limit exists, and given that you've already shown that parallel sessions each experience a limit, it would be a per-TCP-stream limit.

So running the tests I've suggested is important to give us clues here.
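A quick way to get the RTT and jitter numbers being asked for (a sketch; nas.example.net stands in for the far end of the tunnel):

```shell
# 20 probes, then read the summary line:
ping -c 20 nas.example.net
# The Linux summary reports "rtt min/avg/max/mdev"; a large mdev relative
# to avg means jitter, while a steady avg with tiny mdev means a clean path.
```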
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
However, weren't you saying that Filezilla allowed you 3.3MB/s per file... with 10 files?

Yes, interacting with end users on these sorts of issues is sort of an exercise in reading as much as you can out of the things that they're telling you and combining that with an understanding of ISP industry practices. Those of us who've spent significant time doing high-bandwidth content delivery to end users, such as Usenet, have run into issues at many different layers, and my first guess here was that there might be a stealth per-stream upload speed restriction, most likely enforced in the customer CPE. That sort of thing is mostly popular on HFC networks, though, and usually isn't such an issue on fiber networks.

Unlike HFC networks, which typically maintain insane asymmetry ratios of 20:1+, GPON fiber networks tend to be symmetric, or at least low-ratio asymmetric. This means that CPE typically doesn't need to implement upstream rate limiting, and it is difficult to implement per-flow rate limiting deeper in the network due to the speeds. Further, eyeball networks typically have egress bandwidth to burn, so this all got very interesting once @Zepherian04 indicated this was a fiber ISP of some sort.

maybe the ISP has a rate limiter?

Yeah, I think this was obvious from the first message. But it's a matter of understanding what and why, which requires a little work here to characterize the behaviours, and/or maybe collaring someone at the ISP to cough up an explanation. Since I like to read tea leaves and solve puzzles, and I've got the experience of both the ISP and Usenet worlds, I'm comfortable at least asking pointed questions to tease out helpful bits of information.

Adding more mystery to the equation, I did a little research and found that "Nelnet" is probably Allo Communications, which is AS15108 on the global Internet. I asked our BGP route reflector about that...

Code:

User Access Verification

Password:
rr0# show ip bgp regexp 15108$
BGP table version is 52008997, local router ID is 206.55.64.57, vrf id 0
Default local pref 100, local AS 14536
Status codes:  s suppressed, d damped, h history, * valid, > best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*>i72.46.48.0/20    72.52.108.229                 100      0 6939 15108 i
*>i72.46.62.0/24    72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i72.46.63.0/24    72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 i
*>i104.192.104.0/22 72.52.108.229                 100      0 6939 15108 i
*>i104.218.64.0/21  72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i135.84.220.0/22  72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i162.210.4.0/22   72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i162.219.192.0/22 72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i162.246.148.0/22 72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i162.250.116.0/22 72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i167.248.0.0/18   72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i167.248.64.0/18  72.52.108.229                 100      0 6939 15108 i
*>i192.69.112.0/24  72.52.108.229                 100      0 6939 15108 i
*>i192.77.188.0/22  72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i192.195.111.0/24 72.52.108.229                 100      0 6939 15108 i
*>i198.178.28.0/22  72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 i
*>i198.183.0.0/23   72.52.108.229                 100      0 6939 15108 i
*>i198.183.3.0/24   72.52.108.229                 100      0 6939 15108 i
*>i198.183.4.0/22   72.52.108.229                 100      0 6939 15108 i
*>i199.123.0.0/22   72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i199.187.114.0/23 72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i204.145.186.0/24 72.52.108.229                 100      0 6939 15108 i
*>i206.222.192.0/20 72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 i
*>i206.222.208.0/20 72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
  i209.50.0.0/20    72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i209.50.16.0/20   72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 15108 15108 15108 15108 i
*>i216.75.112.0/20  72.52.108.229                 100      0 6939 15108 15108 15108 15108 15108 i

Displayed  27 routes and 905169 total paths
rr0# exit
Connection closed by foreign host.


I also looked from another vantage point. It looks like you get all your transit from 174 (Cogent) and 6939 (Hurricane) so upstream bandwidth shouldn't be a problem. Wow look at that AS padding though. Is that nine elements? Geez.

Lots of this seems to be going through Cogent Omaha. No real clues, just interesting info.
 

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
Both FreeBSD and Linux are natively tuned for reasonable 1GbE LAN performance, and I'm guessing that you should still be seeing reasonable performance in a regional network, unless it's being mucked with. Can you run a ping test from host to host to see what the RTT is like, and whether there is any jitter?

My other question is whether this is a directional effect of some sort, such as an upstream per-stream limitation, which is why it would be really useful for you to run some performance tests against nearby and more distant archive/mirror sites. If we find out that you can download at 10MBytes/sec from Denver but cross-region traffic is showing as limited, it is reasonable to assume an ingress limit exists, and given that you've already shown that parallel sessions each experience a limit, it would be a per-TCP-stream limit.

So running the tests I've suggested is important to give us clues here.
I've done a few things in the past two hours or so, so here goes!

Just to clarify, I have an OpenVPN tunnel straight to my server, using the built-in OpenVPN server in TrueNAS. The only traffic that goes over this network goes straight to TrueNAS; I cannot access the rest of the local network through it.

Checked my connection speed at both locations to some servers around the country, and they both had about the same results. Denver (~250 miles, 100Mb/s up/down) and Los Angeles (~1250 miles, 60Mb/s up, 30Mb/s down).

I then ran a few more iperf/other tests between my two locations through the VPN, and I'm getting about 17ms ping, 3.3MB/s, both up and down. No jitter.

I decided to run as many separate instances of Filezilla running 10 transfers each as it took to saturate my connection (not over the VPN, using SFTP), and I was able to saturate my gigabit connection (~750Mb/s real world, about 95MB/s) showing that the server can handle the input. Each individual connection was still limited to 3.3MB/s however.

I then decided to run the same Filezilla barrage through the VPN, and funnily enough, I ended up getting the same result as over the open internet, albeit 10% slower, because of the VPN (~680Mb/s real world, about 85MB/s) and server CPU usage shot up about 4% according to the TrueNAS reporting, showing that some actual work was being done.

Once all of that was over, I went back over to the other location and ran the same tests/Filezilla barrages on the local network. I ended up with expected results on the local network, with ~65MB/s being the result (the limit of my laptop's wifi, it seems). When I ran the same tests again on the local network, this time connecting through the OpenVPN profile, the rate was 45MB/s, much higher than the 3.3MB/s I was expecting.

All of this together leads me to believe that my ISP somehow has per-stream limits on transfers, as I am able to fully saturate the VPN connection with multiple streams. Let me know if I did anything wrong!
 

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
Yes, interacting with end users on these sorts of issues is sort of an exercise in reading as much as you can out of the things that they're telling you and combining that with an understanding of ISP industry practices. Those of us who've spent significant time doing high-bandwidth content delivery to end users, such as Usenet, have run into issues at many different layers, and my first guess here was that there might be a stealth per-stream upload speed restriction, most likely enforced in the customer CPE. That sort of thing is mostly popular on HFC networks, though, and usually isn't such an issue on fiber networks.

Unlike HFC networks, which typically maintain insane asymmetry ratios of 20:1+, GPON fiber networks tend to be symmetric, or at least low-ratio asymmetric. This means that CPE typically doesn't need to implement upstream rate limiting, and it is difficult to implement per-flow rate limiting deeper in the network due to the speeds. Further, eyeball networks typically have egress bandwidth to burn, so this all got very interesting once @Zepherian04 indicated this was a fiber ISP of some sort.



Yeah, I think this was obvious from the first message. But it's a matter of understanding what and why, which requires a little work here to characterize the behaviours, and/or maybe collaring someone at the ISP to cough up an explanation. Since I like to read tea leaves and solve puzzles, and I've got the experience of both the ISP and Usenet worlds, I'm comfortable at least asking pointed questions to tease out helpful bits of information.

Adding more mystery to the equation, I did a little research and found that "Nelnet" is probably Allo Communications, which is AS15108 on the global Internet. I asked our BGP route reflector about that...

(BGP route reflector output snipped; identical to the listing in jgreco's post above)

I also looked from another vantage point. It looks like you get all your transit from 174 (Cogent) and 6939 (Hurricane) so upstream bandwidth shouldn't be a problem. Wow look at that AS padding though. Is that nine elements? Geez.

Lots of this seems to be going through Cogent Omaha. No real clues, just interesting info.
This is some interesting info! And yes, I have Allo Communications at both locations. If you read my reply to jgreco there is more information on things I have done there. I'm tending to agree with you that it is per-stream capped.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Let me know if I did anything wrong!

So with that in mind, let me request some clarifications:

Checked my connection speed at both locations to some servers around the country,

Okay, but was this to speedtest sites? Because one of the dirty little secrets of the ISP business is we know the IP addresses of all the speedtest sites and if we want to cheat, we exempt them.

Testing upstream speeds generally requires that you have a machine somewhere on a high speed pipe that you can firehose at. That's a heavy lift and no good general purpose solution exists.

Downloads, though, are usually easier to find and more honest. Try looking at the list of Debian or Ubuntu download sites and seeing if you can find one that seems close to you.

Filezilla barrage through the VPN, and funnily enough, I ended up getting the same result as over the open internet, albeit 10% slower, because of the VPN (~680Mb/s real world, about 85MB/s)

And you mean you connected from your laptop using OpenVPN over the local ethernet to the local server and got the 85MB/s, correct? I just want to make sure we've got a common set of facts.

If that's the case, I do have one more question for you. Is the OpenVPN setup using TCP or UDP transport? I fear the answer is "UDP", because that's usually the default for OpenVPN. But, if it happens that you're using TCP, try switching TO UDP. Session oriented rate limiters occasionally make the mistake of limiting only TCP packets.
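For reference, the transport is selected with OpenVPN's proto directive (the TrueNAS UI typically exposes the same choice as a protocol dropdown). A minimal server-side fragment:

```
# OpenVPN transport selection, server side:
proto udp            # the usual default; datagram transport
# proto tcp-server   # TCP transport; pairs with "proto tcp-client" on the client
```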
 

Zepherian04

Dabbler
Joined
Apr 1, 2021
Messages
10
Downloads, though, are usually easier to find and more honest. Try looking at the list of Debian or Ubuntu download sites and seeing if you can find one that seems close to you.
I'm not entirely certain how to do that, but I know for a fact I typically get ~50MB/s when I download most things, provided the server on the other side allows it. I did use the speedtest sites initially, so I went ahead and downloaded some ISOs from a few places (Ubuntu, Debian, Raspberry Pi) and they seemed to download at 30-50MB/s (~200-400Mb/s). I also read up on how to use speedtest-cli, did some tests, and it had similar results to my initial post.

And you mean you connected from your laptop using OpenVPN over the local ethernet to the local server and got the 85MB/s, correct? I just want to make sure we've got a common set of facts.
No actually, in this example I was connecting from my desktop (wired ethernet) to the server from the other site. Using Filezilla SFTP, no VPN, is how I got that result, which maxed out the upload at my site, as well as the download on the other end (~750Mb/s). I ran the same test from the other site, this time through the VPN, giving me this result:
I then decided to run the same Filezilla barrage through the VPN, and funnily enough, I ended up getting the same result as over the open internet, albeit 10% slower, because of the VPN (~680Mb/s real world, about 85MB/s) and server CPU usage shot up about 4% according to the TrueNAS reporting, showing that some actual work was being done.
Each individual stream was limited to 3.3MB/s, but altogether I was able to saturate both locations connections.


Once all of that was over, I went back over to the other location and ran the same tests/Filezilla barrages on the local network. I ended up with expected results on the local network, with ~65MB/s being the result (the limit of my laptop's wifi, it seems). When I ran the same tests again on the local network, this time connecting through the OpenVPN profile, the rate was 45MB/s, much higher than the 3.3MB/s I was expecting.
I then went to the other site and ran the same tests that I ran from the other location, this time on the local network. This time, the results were as expected, though limited by wifi speeds (that's 45MB/s per stream). This, to me, eliminated the possibility of it being hardware (server) related.

If that's the case, I do have one more question for you. Is the OpenVPN setup using TCP or UDP transport? I fear the answer is "UDP", because that's usually the default for OpenVPN. But, if it happens that you're using TCP, try switching TO UDP. Session oriented rate limiters occasionally make the mistake of limiting only TCP packets.
I have the VPN set to the default (UDP) and I have tried a few different ports with the same result. Also, I have my Filezilla SFTP transfers running over TCP, with the same 3.3MB/s per stream limit.

Let me know if you have any other ideas/questions!
 