[2019]: Network tunables for 10G NICs

Wanderhoden.

Dabbler
Joined
Oct 13, 2018
Messages
14
Hey there,

I found some old threads on this topic, but they seem to be outdated or don't contain any essential information.

After recently upgrading to 10GbE, I observed with iperf3 that the network was quite slow: it peaked at only around 4 Gbit/s in both directions.

On my Windows machine (ASUS XG-C100C) I was able to do some tuning (larger buffers, 9K jumbo frames, ...).
The iperf result still isn't perfect, but it looks a lot better from the client's side:

Code:
iperf3 - Windows to FreeNAS

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   805 MBytes  6.75 Gbits/sec
[  5]   1.00-2.00   sec   882 MBytes  7.40 Gbits/sec
[  5]   2.00-3.00   sec   907 MBytes  7.61 Gbits/sec
[  5]   3.00-4.00   sec   913 MBytes  7.66 Gbits/sec
[  5]   4.00-5.00   sec   909 MBytes  7.63 Gbits/sec
[  5]   5.00-6.00   sec   845 MBytes  7.09 Gbits/sec
[  5]   6.00-7.00   sec   868 MBytes  7.28 Gbits/sec
[  5]   7.00-8.00   sec   895 MBytes  7.51 Gbits/sec
[  5]   8.00-9.00   sec   913 MBytes  7.66 Gbits/sec
[  5]   9.00-10.00  sec   902 MBytes  7.57 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  8.63 GBytes  7.42 Gbits/sec                  sender
[  5]   0.00-10.00  sec  8.63 GBytes  7.42 Gbits/sec                  receiver

iperf Done.
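
For anyone reproducing the numbers: both directions can be driven from one side using iperf3's reverse mode, which avoids juggling two separate client/server setups. A minimal sketch, assuming the FreeNAS box is reachable at the placeholder address 192.168.1.10 and runs the server:

Code:
# On the FreeNAS box:
iperf3 -s

# On the Windows client: client -> server (the result above)
iperf3 -c 192.168.1.10 -t 10

# -R reverses the direction, so the FreeNAS box sends
# (the server -> client case shown below)
iperf3 -c 192.168.1.10 -t 10 -R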


From the server's side (Intel X540-T1), iperf still peaks at only around 4 Gbit/s:


Code:
iperf3 - FreeNAS to Windows


[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   457 MBytes  3.83 Gbits/sec    0    210 KBytes
[  5]   1.00-2.00   sec   461 MBytes  3.87 Gbits/sec    0    210 KBytes
[  5]   2.00-3.00   sec   448 MBytes  3.76 Gbits/sec    0    210 KBytes
[  5]   3.00-4.00   sec   436 MBytes  3.66 Gbits/sec    0    210 KBytes
[  5]   4.00-5.00   sec   457 MBytes  3.83 Gbits/sec    0    210 KBytes
[  5]   5.00-6.00   sec   425 MBytes  3.57 Gbits/sec    0    210 KBytes
[  5]   6.00-7.00   sec   474 MBytes  3.97 Gbits/sec    0    210 KBytes
[  5]   7.00-8.00   sec   467 MBytes  3.92 Gbits/sec    0    210 KBytes
[  5]   8.00-9.00   sec   482 MBytes  4.05 Gbits/sec    0    210 KBytes
[  5]   9.00-10.00  sec   481 MBytes  4.04 Gbits/sec    0    210 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.48 GBytes  3.85 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  4.48 GBytes  3.85 Gbits/sec                  receiver

iperf Done.



Are there any tunables besides jumbo frames that I can set to improve the network speed?


Relevant server stats:
- FreeNAS 11.2-U6
- Intel X540-T1 (genuine) in a PCIe x16 slot
- Cat 7 cabling (2 meters, point-to-point)
- Gigabyte GA-X150M-PRO
- Xeon E3-1220 v5
- 24 GB ECC RAM
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
What's the PCIe speed of the slot on the server's motherboard? Maybe the 10G NIC's link is way too slow?
 

Wanderhoden.

Dabbler
Joined
Oct 13, 2018
Messages
14
PCIe slot on the motherboard: 3.0 x16
PCIe connector on the network card: 2.0 x8

This shouldn't be the problem: PCIe 2.0 x8 gives roughly 32 Gbit/s of usable bandwidth, and I can already receive at almost 8 Gbit/s.
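
To rule it out completely, the negotiated link (rather than the nominal slot/card specs) can be read from the NIC's PCIe capability. A sketch using FreeBSD's pciconf; the selector shown is a placeholder, the real one comes from the first command:

Code:
# Find the X540 and its PCI selector (ix driver)
pciconf -lv | grep -B 3 X540

# Dump its capabilities; the PCI-Express line reports the
# negotiated link speed and width, e.g. "link x8(x8)"
pciconf -lc pci0:1:0:0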


I found some useful posts and applied the tuning from the 10 Gig Networking Primer and also from 45Drives, but most of the settings were already set correctly.
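
For reference, the settings those guides revolve around are of this kind; a sketch with commonly quoted FreeBSD 11 values, not a recommendation (on FreeNAS these belong in System → Tunables so they survive a reboot, and ix0 is an assumed interface name):

Code:
# Raise the ceiling for socket buffers and TCP autotuning
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216

# Buffer autotuning is on by default in FreeBSD 11; verify it
sysctl net.inet.tcp.sendbuf_auto
sysctl net.inet.tcp.recvbuf_auto

# Jumbo frames on the X540 (must match the client and any switch)
ifconfig ix0 mtu 9000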

Warning: Some tunables killed my nginx service (the web UI) and even prevented it from starting again (code 55: no buffer space available). I had to manually revert the values via the command line to regain access to the web interface.

This happened in particular with the sysctls net.inet.tcp.sendspace and net.inet.tcp.recvspace.
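
In case someone runs into the same lockout: the stock FreeBSD 11 defaults can be restored over SSH or the local console, after which nginx starts again. A sketch, assuming nothing else overrides these values:

Code:
# FreeBSD 11 defaults for the fixed (non-autotuned) socket buffers
sysctl net.inet.tcp.sendspace=32768
sysctl net.inet.tcp.recvspace=65536

# Bring the web UI back up
service nginx restart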
 