TCP Buffer Issue?

Status
Not open for further replies.

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
I've been trying to track down an issue I'm having with TCP streams on my 200Mbps WAN connection. After a reboot I can get ~90Mbps TX for about 10 seconds at most before it falls to ~30Mbps and stays there (using iperf remotely with a window size of 550k). I tried autotune and then adjusted some of the tunables myself based on a few articles I read, but I haven't been able to solve the issue; I suspect it may be my buffer size. If I run multiple parallel streams, 4 streams will saturate my remote 125Mbps connection.
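
For reference, the tests are roughly the following (iperf3 assumed; the hostname is just a placeholder):

Code:
# single stream, 550k window: ~90Mbps for ~10 seconds, then drops to ~30Mbps
iperf3 -c remote.example.com -w 550K -t 60

# four parallel streams will saturate the remote 125Mbps connection
iperf3 -c remote.example.com -w 550K -t 60 -P 4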

Specs:
-Xeon E5-2683v3
-64GB ECC RAM
-RaidZ2 6x8TB
-Intel I210-AT (in use) or Intel I218LM available

Here you can see it start out fine and then drop off rather quickly; sometimes it stays ~90 for up to 10 seconds, but no more than that...

[screenshot: iperf output]


Here are my current tunables...

[screenshot: current tunables]


If I run iperf on my MacBook from the same remote location, it has no issue saturating my remote 125Mbps connection. It's behind the same router and switch, so I doubt it is a firewall issue or anything related to the network equipment (Ubiquiti USG Pro > UniFi Switch 16). The latency to the remote location is ~35ms.

One other thing to note: iperf and Plex streams from the FreeNAS server show the same result, averaging ~30Mbps, BUT if I disable my port forward for Plex and let it fall back to an indirect "relay" connection, it will maintain the 70Mbps+ needed to remotely stream a 4K file. I'm not sure how the relay connection differs from the regular TCP streams, and I know Plex claims to limit relay connections to 2Mbps, but it is working remotely to my Shield TV with a direct stream. I can verify this through Plex and from the fact that HDR is maintained (which wouldn't be possible if it were transcoding to a smaller file).
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
These changes have helped a little, but the issue is still there. I made a mistake in my first test and had nmbclusters set as a sysctl; that has been corrected (and increased), and it seems to be one of the variables that improved performance, but further increases had no effect. As the test progresses it still slows to a ~40Mbps average; hopefully with some help I can figure out what is holding it back...
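
To be clear about the nmbclusters correction, it's now set as a loader tunable rather than a sysctl, along these lines (the value shown is just an example, not the exact one I landed on):

Code:
# /boot/loader.conf (or a "loader" type tunable in the FreeNAS GUI)
kern.ipc.nmbclusters="2097152"  # example value only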

[screenshot: iperf output]


[screenshot: updated tunables]
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
It looks like you're still having the same problem as back here: #11

The fact that your congestion window spikes at 17 seconds implies that some external event (congestion, packet loss, buffering, aliens, etc.) disrupted your TCP window scaling.

This article might be helpful:
https://fasterdata.es.net/host-tuning/freebsd/

Of particular note is the fact that FreeBSD will cache TCP connection data for up to an hour for a given site, so repeated tests might have artificially limited window sizes. You might also want to try setting inflight.enable to off.
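
Something like this will show whether the host cache is skewing your repeat runs (exact sysctl names vary by FreeBSD release, and the inflight knob was removed from newer ones):

Code:
# dump the cached per-host TCP parameters (look for your remote IP)
sysctl net.inet.tcp.hostcache.list

# flag the cache to be purged so the next test starts clean
sysctl net.inet.tcp.hostcache.purge=1

# disable inflight bandwidth limiting (older releases only)
sysctl net.inet.tcp.inflight.enable=0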

Beyond that, you're going to have to study the behavior of your WAN link vs FreeBSD. Since in the other thread you demonstrated that other OSs have better performance, there must be something happening that's tripping up FreeBSD's congestion and window scaling algorithms.

Edit:

Also, you reference 125Mb/s and 200Mb/s connections in your first post. Which is where? Are these wireless links? VPNs? Who is your ISP? Is there any other traffic riding these links when you're testing? Is it bursty? Steady streams?
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
Also, you reference 125Mb/s and 200Mb/s connections in your first post. Which is where? Are these wireless links? VPNs? Who is your ISP? Is there any other traffic riding these links when you're testing? Is it bursty? Steady streams?

200Mb up at the server, 125Mb down at the remote location. I changed to the cubic algorithm today and things have improved, but it still falls to ~30Mbps regularly and even lower at times (which may be related to the repeated tests, as you suggested).
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
I changed up the config a little and bumped the window size up to 750k for testing. There is definitely improvement from the average ~30Mbps I started with, but it still falls back down from time to time, the same as it did before; now it just has higher sustained peaks...

[screenshot: iperf output]


[screenshot: updated tunables]
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
Here's a short test to the same remote location using a MacBook as the client instead of the FreeNAS server. Even when I increase the duration of the test, it stays between ~70-90Mbps the whole time...

[screenshot: iperf output from MacBook]
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Your window size is going to have to be at least 512KB to support >100Mb/s on a 35ms link.
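
That comes straight from the bandwidth-delay product:

Code:
window >= bandwidth x RTT
       = 100 Mb/s x 0.035 s = 3.5 Mb ~= 437 KB, so 512KB leaves a little headroom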

If you look at the Retr column in your iperf output, your performance tanks when the TCP session experiences retries.
The output from the Mac does not include the same data, but I would suspect the issue is happening to the Mac as well; OS X's congestion control and window scaling algorithms are more aggressive at using the entire link than BSD's are by default, so the events have less impact on the TCP stream.

You need to either eliminate whatever is causing the congestion on the link, or tune the congestion control algorithm on the NAS to be more aggressive about re-scaling the window up after a congestion event (retries).
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
Your window size is going to have to be at least 512KB to support >100Mb/s on a 35ms link.

If you look at the Retr column in your iperf output, your performance tanks when the TCP session experiences retries.
The output from the Mac does not include the same data, but I would suspect the issue is happening to the Mac as well; OS X's congestion control and window scaling algorithms are more aggressive at using the entire link than BSD's are by default, so the events have less impact on the TCP stream.

You need to either eliminate whatever is causing the congestion on the link, or tune the congestion control algorithm on the NAS to be more aggressive about re-scaling the window up after a congestion event (retries).

I have FreeNAS set with a default window of 560KB, and the tests I've run on both machines have been with a window of at least 550KB. I thought the retries might be due to insufficient buffer space, and increasing it did have some effect, but nothing sustained long enough to alleviate the issue completely. Do you have any suggestions on how to further tune it to be more aggressive after it encounters duplicate acknowledgements?
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
What tunable would I use to switch over to westwood+?

cc_westwood+_load as a loader tunable and net.inet.tcp.cc.algorithm=westwood+ as a sysctl?
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
I also can't get CHD to load properly. I added it as a loader tunable with cc_chd_load and net.inet.tcp.cc.algorithm=chd, but the system still falls back to newreno.
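
For reference, this is how I'm checking what actually got loaded after a reboot (in case I have the tunable name wrong):

Code:
# list the congestion control algorithms the kernel knows about
sysctl net.inet.tcp.cc.available

# show which one is currently active
sysctl net.inet.tcp.cc.algorithm

# see whether cc_chd.ko was actually loaded at boot
kldstat | grep cc_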
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
I have FreeNAS set with a default window of 560KB, and the tests I've run on both machines have been with a window of at least 550KB. I thought the retries might be due to insufficient buffer space, and increasing it did have some effect, but nothing sustained long enough to alleviate the issue completely. Do you have any suggestions on how to further tune it to be more aggressive after it encounters duplicate acknowledgements?

I'd push the window size up to around 1MB.
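
Something along these lines (values and hostname are just examples, not gospel):

Code:
# let the socket buffers grow to ~2MB so a 1MB window actually fits
sysctl net.inet.tcp.sendbuf_max=2097152
sysctl net.inet.tcp.recvbuf_max=2097152

# then re-test with an explicit 1MB window
iperf3 -c remote.example.com -w 1M -t 60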

It's likely an outside influence.

I'd also try to use htcp. That's the one I've had the most success with.
In fact, you might want to go back over this: https://calomel.org/freebsd_network_tuning.html and explicitly enable the htcp-related tunables he recommends.
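
The htcp-related pieces from that page boil down to roughly this (double-check against the article; they can also be entered as loader/sysctl tunables in the FreeNAS GUI):

Code:
# /boot/loader.conf: load the module at boot
cc_htcp_load="YES"

# sysctls: switch to htcp and enable its optional behaviors
net.inet.tcp.cc.algorithm=htcp
net.inet.tcp.cc.htcp.adaptive_backoff=1
net.inet.tcp.cc.htcp.rtt_scaling=1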

Beyond that, my next step would be to put Wireshark on the link, capture the entire iperf stream on both ends, and figure out how the windows are being scaled when iperf reports retries and where the packet loss is occurring.
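
For example, on the NAS end (interface name and iperf port are whatever yours actually are):

Code:
# capture the full iperf stream for later analysis in wireshark
tcpdump -i igb0 -s 0 -w /tmp/iperf_nas.pcap port 5201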
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
Thank you! Definitely getting somewhere now with htcp and those tweaks. I had read through it before but mainly focused on the window sizing and buffer changes. I'm now getting ~60-70Mb on average, but it's still all over the place with a range of 10-100Mb; I'll have to see what's going on there.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Thank you! Definitely getting somewhere now with htcp and those tweaks. I had read through it before but mainly focused on the window sizing and buffer changes. I'm now getting ~60-70Mb on average, but it's still all over the place with a range of 10-100Mb; I'll have to see what's going on there.

So the TCP window scaling algorithm is supposed to eventually calculate the "right" window size. There is such a thing as making it too aggressive.

Assuming the network isn't dropping packets due to misconfigured or bad hardware, the most likely explanation here is that there is other traffic on the link. At some points your TCP stream, combined with the other traffic, exceeds the bandwidth of your link and your routing device drops excess packets. The whole point of the congestion algorithm is to figure out how large a TCP window it can use and settle on that, to optimize performance. But if there are occasional bursts of other traffic, that will fuzz the congestion algorithm's picture of the link. Network-layer Quality of Service (QoS) exists to address those issues.
 

jmcguire525

Explorer
Joined
Oct 10, 2017
Messages
94
I believe I have things to a point where they should accomplish my needs. Is there a simple way to apply the tunables so that they are active in my iocage jails? I checked my Plex jail and it is still using newreno despite FreeNAS using htcp.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
I believe I have things to a point where they should accomplish my needs. Is there a simple way to apply the tunables so that they are active in my iocage jails? I checked my Plex jail and it is still using newreno despite FreeNAS using htcp.

Sorry, I can't answer that; I've never used a jail on FreeNAS.
 