Guide to Optimize Network Settings for >1G WAN?

jakesteele

Cadet
Joined
Oct 3, 2023
Messages
3
Hey,

I was wondering if there's a guide anywhere I could read on optimizing TrueNAS SCALE for a 5G WAN. I run 2x 10G LACP internally from my server to my router, and my router has a 5G/5G WAN attached to its 10G port. The server is used as a seedbox and for Plex.

I have 6x Seagate IronWolf Pro NAS HDDs, 2x 2TB Samsung 990 PROs as a metadata/small-files vdev (128K small block size), and 1x 2TB Samsung 990 PRO as L2ARC. I've maxed my RAM out at 64GB DDR3. I have no ports left in the machine to add more storage, so I figure networking is the next thing to focus on.

I'm wondering if there's a guide to improving the networking side of TrueNAS for over-1G networking. I found one recently, but it was only for the FreeBSD version, not SCALE.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Check the Resources section for the 10 gig tuning guide. A lot of it is applicable, or at least hints at things you can tune for WAN.

Do note that LACP on 10G may actually impair performance.
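
For a rough sense of what those guides touch on the SCALE (Linux) side, here's a minimal sketch, assuming a stock Linux /proc/sys layout, that just reads the buffer-related sysctls most high-bandwidth tuning writeups mention. The right values depend on your bandwidth and latency, so this only inspects, it doesn't change anything:

Code:
from pathlib import Path

# Sysctls that high-bandwidth tuning guides commonly touch; names are the
# standard Linux ones exposed under /proc/sys.
SYSCTLS = [
    "net.core.rmem_max",
    "net.core.wmem_max",
    "net.ipv4.tcp_rmem",
    "net.ipv4.tcp_wmem",
    "net.core.netdev_max_backlog",
    "net.ipv4.tcp_congestion_control",
]

for name in SYSCTLS:
    # sysctl "a.b.c" maps to the file /proc/sys/a/b/c
    path = Path("/proc/sys") / name.replace(".", "/")
    value = path.read_text().strip() if path.exists() else "(not present)"
    print(f"{name} = {value}")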
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
5G WAN o_O

I was doing cartwheels when 50/15Mbit became available to me a short time ago........
 

gdreade

Dabbler
Joined
Mar 11, 2015
Messages
34
Do note that LACP on 10G may actually impair performance.

Do you have any references on this? I'm not seeing it in the 10G+ guide.
What kind of perf impact are we talking about, and under what circumstances? (I'm more concerned about the TrueNAS<-->switch LAG than a switch<-->client LAG.)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Link aggregation is a software-defined ethernet interface that in turn distributes packets among the available member interfaces using some hashing algorithm.

Conventional ethernet usually uses direct memory access (DMA) to pass data directly from host memory to the controller's interface: the host driver generates a command to the ethernet controller saying "transmit the 1.5KB block at address @someblock to the node with ethernet address @arpaddr", at which point the ethernet silicon reaches into the system's memory and begins transmitting.

By way of comparison, with a software-defined device, the link aggregation driver has to handle the packet inside the LAG driver to identify what needs to happen, then it has to use the host driver to generate a command to the appropriate silicon device. Receiving may even be a bit more complicated, and you don't get the benefits of offload, so you can actually be hurting your performance. If you have plenty of spare CPU, it may not matter that much, but there are tradeoffs involved any time you start to involve software-defined ethernet interfaces.
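
To make the "hashing algorithm" part concrete, here is a minimal sketch, not the actual bonding driver code, of how a layer2+3-style transmit hash might pin a flow to one member of a two-port LAG. The interface names and the hash function are illustrative only:

Code:
import zlib

MEMBERS = ["eno1", "eno2"]  # hypothetical 10G member interfaces

def pick_member(src_mac, dst_mac, src_ip, dst_ip):
    # Hash the L2/L3 addresses of a flow and map it onto one member port.
    # Every packet of the same flow hashes to the same member, so a single
    # stream never spreads across both links.
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return MEMBERS[zlib.crc32(key) % len(MEMBERS)]

print(pick_member("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02",
                  "192.168.1.10", "192.168.1.20"))

All of that member selection happens in the aggregation layer before the packet ever reaches the NIC driver, which is the extra software step described above.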
 