Network speeds being limited?


JoeB

Contributor
Joined
Oct 16, 2014
Messages
121
I have a Gigabit LAN connection to my FreeNAS box.

I ran iperf as follows:

[root@JOE-FREENAS] ~# iperf -sD
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
Running Iperf Server as a daemon
The Iperf daemon process ID : 52417
[root@JOE-FREENAS] ~#


and on Windows, I'm running JPerf 2.0.2.

I ran several 10-second tests; the results are as follows:

1 stream: [164] 9.0-10.0 sec 33.3 MBytes 279 Mbits/sec
2 streams: [SUM] 9.0-10.0 sec 50.9 MBytes 427 Mbits/sec
3 streams: [SUM] 9.0-10.0 sec 68.2 MBytes 572 Mbits/sec
4 streams: [SUM] 9.0-10.0 sec 91.6 MBytes 768 Mbits/sec
5 streams: [SUM] 9.0-10.0 sec 95.6 MBytes 802 Mbits/sec
6 streams: [SUM] 9.0-10.0 sec 101 MBytes 850 Mbits/sec

10 streams: [SUM] 9.0-10.0 sec 109 MBytes 910 Mbits/sec

So from the above, my network is maxing out at about 900 Mbps, which is roughly what I'd expect since it's Cat 5 cabling. But what is limiting each individual connection to about 200 Mbps?
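
For reference, these are the equivalent command-line invocations for runs like the above (a sketch; JPerf drives the same iperf 2.x binary, and 192.168.1.100 is a stand-in for the FreeNAS box's address):

Code:
iperf -c 192.168.1.100 -t 10 -P 1     # single stream
iperf -c 192.168.1.100 -t 10 -P 4     # four parallel streams
iperf -c 192.168.1.100 -t 10 -P 10    # ten parallel streams; the [SUM] line reports the total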
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633

JoeB

Contributor
Joined
Oct 16, 2014
Messages
121
It's an HP N54L G7 MicroServer. I've fitted 16GB ECC RAM.

Processor:
AMD Turion™ II Neo N54L (2.2GHz)
AMD RS785E/SB820M chipset
Memory:
Two (2) DIMM slots
4GB (1x4GB) Standard/8GB Maximum, using PC3-10600E DDR3 Unbuffered (UDIMM) ECC memory, operating at max. 800MHz
Storage Controller:
Embedded AMD SATA controller with RAID 0, 1
Embedded AMD eSATA controller for connecting external storage devices via the eSATA connector in the rear of the server
Internal Drive Support:
4 Internal HDD Support
Maximum internal SATA storage capacity of up to 8.0TB (4 x 2TB 3.5" SATA drives)
Network Controller:
Embedded NC107i PCI Express Gigabit Ethernet Server Adapter
Expansion Slots:
Slot 1: PCI-Express Gen 2 x16 connector with x16 link
Slot 2: PCI-Express Gen 2 x1 connector with x1 Link
Slot 2-2: PCI-Express x4 slot for optional management card



 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'm less inclined to say it's a performance/threading limitation, because the N54L has two cores, so I'd think you'd see most of your speed with 2 streams rather than 10.

What about your desktop? Do you have any AV or network processes running? Can you try booting a Linux live CD and re-running the test?
 

JoeB

Contributor
Joined
Oct 16, 2014
Messages
121
I found there was an issue: I was copying data to a ReadyNAS box overnight, and the network graph in FreeNAS showed only a constant 300 Mbps and 20% CPU until it finished.


 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
That makes it sound like FreeNAS is not the limiting party.

Some things that would be helpful:
  • Specific details of what tests you are running.
  • Any other tests you've run where things weren't as you expected (and specific details: what were you doing, what kind of files were you moving, etc.).
  • Tests run with large files over CIFS or NFS, checking throughput performance.
  • Swapping the client/server JPerf roles in testing to check whether the problem is symmetric (see the sketch after this list).
  • Testing with different clients to see if the problem is from your client.
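
A minimal sketch of that role swap, assuming the Windows machine sits at a hypothetical 192.168.1.10:

Code:
# On the Windows box, run the server end instead:
iperf -s

# From the FreeNAS shell, push data the other way:
iperf -c 192.168.1.10 -t 10 -P 1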
 

JoeB

Contributor
Joined
Oct 16, 2014
Messages
121
OK, I'll do some testing. The graph above was from moving a batch of 29GB files to the ReadyNAS (acting as an NFS server) using the shell cp command.
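
To separate disk and NFS effects from raw network speed, a quick check worth trying is a dd over the mount (a sketch, assuming a hypothetical mount point /mnt/readynas):

Code:
# Write test: stream 4GB of zeros over the NFS mount, so the source disk
# isn't a factor:
dd if=/dev/zero of=/mnt/readynas/testfile bs=1M count=4096

# Read it back; note that client-side caching can inflate an immediate re-read:
dd if=/mnt/readynas/testfile of=/dev/null bs=1M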
 

philhu

Patron
Joined
May 17, 2016
Messages
258
To add more to this: I put in the Windows driver performance enhancements and the 10GbE network tunables for FreeNAS (see the tunables sketch below).

Connecting iperf on my FreeNAS 9.10 SuperMicro X8DTN+ box (two quad-core 2.83GHz Xeons) to a Win10 machine with a hex-core i7 and 32GB memory, running iperf 2.0.5. Both machines have Intel X540-T2 10GbE cards.

I did manage to max out my 10GbE line, but it took 15 concurrent streams. With 8 concurrent streams, it hit 1097 MBytes/sec, with each stream averaging about 135 MBytes/sec.
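
For context, these are the kinds of FreeBSD tunables usually meant by "10GbE network tunables" (illustrative names and values only; not necessarily the exact ones applied on this box):

Code:
# Common FreeBSD sysctl tunables for 10GbE socket-buffer sizing (illustrative):
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=32768
net.inet.tcp.recvbuf_inc=65536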

15 streams-maxed out line:
Code:
D:\iperf-2.0>iperf -c 192.168.1.49 -p 5001  -f M  -t 10 -P 15
------------------------------------------------------------
Client connecting to 192.168.1.49, TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[ 10] local 192.168.1.180 port 52668 connected with 192.168.1.49 port 5001
[ 17] local 192.168.1.180 port 52675 connected with 192.168.1.49 port 5001
[ 16] local 192.168.1.180 port 52674 connected with 192.168.1.49 port 5001
[ 15] local 192.168.1.180 port 52673 connected with 192.168.1.49 port 5001
[ 13] local 192.168.1.180 port 52671 connected with 192.168.1.49 port 5001
[ 12] local 192.168.1.180 port 52670 connected with 192.168.1.49 port 5001
[ 14] local 192.168.1.180 port 52672 connected with 192.168.1.49 port 5001
[ 11] local 192.168.1.180 port 52669 connected with 192.168.1.49 port 5001
[  8] local 192.168.1.180 port 52666 connected with 192.168.1.49 port 5001
[  7] local 192.168.1.180 port 52665 connected with 192.168.1.49 port 5001
[  5] local 192.168.1.180 port 52663 connected with 192.168.1.49 port 5001
[  6] local 192.168.1.180 port 52664 connected with 192.168.1.49 port 5001
[  4] local 192.168.1.180 port 52662 connected with 192.168.1.49 port 5001
[  3] local 192.168.1.180 port 52661 connected with 192.168.1.49 port 5001
[  9] local 192.168.1.180 port 52667 connected with 192.168.1.49 port 5001
[ ID] Interval  Transfer  Bandwidth
[ 17]  0.0-10.0 sec  632 MBytes  63.3 MBytes/sec
[ 16]  0.0-10.0 sec  639 MBytes  63.9 MBytes/sec
[ 15]  0.0-10.0 sec  639 MBytes  63.9 MBytes/sec
[ 13]  0.0-10.0 sec  881 MBytes  88.1 MBytes/sec
[ 12]  0.0-10.0 sec  892 MBytes  89.2 MBytes/sec
[ 14]  0.0-10.0 sec  648 MBytes  64.8 MBytes/sec
[ 11]  0.0-10.0 sec  851 MBytes  85.2 MBytes/sec
[  8]  0.0-10.0 sec  650 MBytes  65.0 MBytes/sec
[  7]  0.0-10.0 sec  644 MBytes  64.4 MBytes/sec
[  5]  0.0-10.0 sec  860 MBytes  86.1 MBytes/sec
[  6]  0.0-10.0 sec  634 MBytes  63.4 MBytes/sec
[  4]  0.0-10.0 sec  861 MBytes  86.1 MBytes/sec
[  3]  0.0-10.0 sec  966 MBytes  96.7 MBytes/sec
[  9]  0.0-10.0 sec  649 MBytes  64.9 MBytes/sec
[ 10]  0.0-10.0 sec  842 MBytes  84.1 MBytes/sec
[SUM]  0.0-10.0 sec  11288 MBytes  1127 MBytes/sec


8 streams, almost maxed:
Code:
D:\iperf-2.0>iperf -c 192.168.1.49 -p 5001  -f M  -t 10 -P 8
------------------------------------------------------------
Client connecting to 192.168.1.49, TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[ 10] local 192.168.1.180 port 52631 connected with 192.168.1.49 port 5001
[  9] local 192.168.1.180 port 52630 connected with 192.168.1.49 port 5001
[  5] local 192.168.1.180 port 52626 connected with 192.168.1.49 port 5001
[  8] local 192.168.1.180 port 52629 connected with 192.168.1.49 port 5001
[  7] local 192.168.1.180 port 52628 connected with 192.168.1.49 port 5001
[  6] local 192.168.1.180 port 52627 connected with 192.168.1.49 port 5001
[  3] local 192.168.1.180 port 52624 connected with 192.168.1.49 port 5001
[  4] local 192.168.1.180 port 52625 connected with 192.168.1.49 port 5001
[ ID] Interval  Transfer  Bandwidth
[ 10]  0.0-10.0 sec  1360 MBytes  136 MBytes/sec
[  9]  0.0-10.0 sec  1392 MBytes  139 MBytes/sec
[  5]  0.0-10.0 sec  1360 MBytes  136 MBytes/sec
[  8]  0.0-10.0 sec  1378 MBytes  138 MBytes/sec
[  7]  0.0-10.0 sec  1332 MBytes  133 MBytes/sec
[  6]  0.0-10.0 sec  1360 MBytes  136 MBytes/sec
[  3]  0.0-10.0 sec  1420 MBytes  142 MBytes/sec
[  4]  0.0-10.0 sec  1366 MBytes  137 MBytes/sec
[SUM]  0.0-10.0 sec  10969 MBytes  1097 MBytes/sec



Single stream, I got 321 MBytes/sec up to the NAS and 354 MBytes/sec down:

Code:
D:\iperf-2.0>iperf -c 192.168.1.49 -p 5001  -f M  -t 10 --dualtest -P 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.49, TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.180 port 55259 connected with 192.168.1.49 port 5001
[  5] local 192.168.1.180 port 5001 connected with 192.168.1.49 port 34730
[ ID] Interval  Transfer  Bandwidth
[  4]  0.0-10.0 sec  3208 MBytes  321 MBytes/sec
[  5]  0.0-10.0 sec  3548 MBytes  354 MBytes/sec


So it looks thread limited, and it confirms my previous finding: single-copy speed is not amazing but OK, yet if you add more simultaneous copies, they all run well, up to the line's max speed.

If there is anything else someone wants me to test, let me know
 
Joined
Jun 2, 2016
Messages
13
iperf 2.0.5 has insufficient memory between the reporter thread and the traffic thread(s), so it spends a lot of time waiting on a mutex. I'd suggest iperf 2.0.9, where this bottleneck has been addressed, or iperf 3, which I believe doesn't use threading and hence has no need for the mutex.

Bob
 

philhu

Patron
Joined
May 17, 2016
Messages
258
I tried iperf 2.0.9. The numbers are about the same for some tests and a little worse on others. An example is the 8-thread test:
Code:
D:\iPerf_network\iperf-2.0.9-win32>iperf -c 192.168.1.49 -p 5001  -f M  -t 10 -P 8
------------------------------------------------------------
Client connecting to 192.168.1.49, TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.180 port 64991 connected with 192.168.1.49 port 5001
[  5] local 192.168.1.180 port 64996 connected with 192.168.1.49 port 5001
[  9] local 192.168.1.180 port 64995 connected with 192.168.1.49 port 5001
[ 10] local 192.168.1.180 port 64997 connected with 192.168.1.49 port 5001
[  3] local 192.168.1.180 port 64990 connected with 192.168.1.49 port 5001
[  8] local 192.168.1.180 port 64994 connected with 192.168.1.49 port 5001
[  6] local 192.168.1.180 port 64992 connected with 192.168.1.49 port 5001
[  7] local 192.168.1.180 port 64993 connected with 192.168.1.49 port 5001
[ ID] Interval  Transfer  Bandwidth
[  4]  0.0-10.0 sec  664 MBytes  66.4 MBytes/sec
[  5]  0.0-10.0 sec  1260 MBytes  126 MBytes/sec
[  9]  0.0-10.0 sec  677 MBytes  67.7 MBytes/sec
[ 10]  0.0-10.0 sec  1136 MBytes  114 MBytes/sec
[  3]  0.0-10.0 sec  677 MBytes  67.7 MBytes/sec
[  8]  0.0-10.0 sec  674 MBytes  67.4 MBytes/sec
[  6]  0.0-10.0 sec  1172 MBytes  117 MBytes/sec
[  7]  0.0-10.0 sec  1192 MBytes  119 MBytes/sec
[SUM]  0.0-10.0 sec  7452 MBytes  745 MBytes/sec


Here is single thread, showing about a 10% decrease:
Code:
D:\iPerf_network\iperf-2.0.9-win32>iperf -c 192.168.1.49 -p 5001  -f M  -t 10
------------------------------------------------------------
Client connecting to 192.168.1.49, TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.180 port 65233 connected with 192.168.1.49 port 5001
[ ID] Interval  Transfer  Bandwidth
[  3]  0.0-10.0 sec  2983 MBytes  298 MBytes/sec


I would try iperf3, but I need the binary for FreeNAS 9.10.
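
One hedged option: FreeNAS 9.10 is based on FreeBSD 10.3, so iperf3 should be installable from packages inside a jail (untested here):

Code:
# Inside a FreeNAS 9.10 jail (FreeBSD 10.x userland):
pkg install iperf3

# Then run the server end on the NAS:
iperf3 -s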
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Joined
Jun 2, 2016
Messages
13
philhu said:
I tried iperf 2.0.9. The numbers are about the same for some tests and a little worse on others. ... I would try iperf3, but I need the binary for FreeNAS 9.10.

The TCP window size seems a bit small. (The theory is that the actual congestion window should be about twice the bandwidth-delay product.) iperf 3 and the Linux versions of iperf 2.0.9 will give the congestion window (CWND), round-trip time (RTT), and TCP retries (per the client, on 2.0.9). On 2.0.9 one has to use enhanced reports (-e) to get this output; with iperf3 I believe it's standard output. Setting the reporting interval to, say, 25 milliseconds might give some insight into the TCP sliding window; that would be -i 0.025. (For 2.0.9 the fastest supported interval is 5 ms, or -i 0.005, whereas 2.0.5's fastest is 500 ms.)
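
As a worked example of that window math: a 64 KByte window is 524,288 bits, and a window-limited stream can move at most one window per round trip, so at a 2 ms RTT a single TCP stream is capped near 524,288 / 0.002 ≈ 262 Mbits/sec, in the ballpark of the ~279 Mbits/sec single-stream result earlier in the thread. A sketch of the enhanced-reports invocation against the same server used above:

Code:
# iperf 2.0.9 client with enhanced reports (-e) and a 25 ms reporting
# interval, so CWND and RTT show up per interval:
iperf -c 192.168.1.49 -e -i 0.025 -t 10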

Bob
 

JoeB

Contributor
Joined
Oct 16, 2014
Messages
121
This conversation very quickly went past my pay grade, but I'm still having this issue. I'm in the middle of downloading 5TB of data (large files, >5GB each) to FreeNAS, and the monitor window is showing a constant 400 Mbps.

I have six SCP connections, each showing about 7MB/s.

If I copy a file from FreeNAS to Windows, the TX shows 400 Mbps too. Why is there a 400 Mbps bottleneck? Any ideas?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
JoeB said:
This conversation very quickly went past my pay grade, but I'm still having this issue. ... Why is there a 400 Mbps bottleneck? Any ideas?
CPU encryption limitation for SCP?
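
One way to test that hypothesis (a sketch; the host, user, and paths are hypothetical, and cipher availability varies by OpenSSH version):

Code:
# Baseline copy with the default cipher:
scp /tmp/bigfile user@freenas:/mnt/tank/

# Same copy with a lighter cipher; if throughput jumps while CPU drops,
# encryption is the bottleneck rather than the network:
scp -c aes128-ctr /tmp/bigfile user@freenas:/mnt/tank/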
 

philhu

Patron
Joined
May 17, 2016
Messages
258
We have determined it is very thread limited: multiple copies at once all seem to reach the limits together, but a single transaction/copy does not.

Mine limits at 200, but 3 simultaneous copies get 175 each, or 525 aggregate.

I think Bob is working on compiling iperf 2.0.9 for FreeBSD, which will give much better stats without the threading bottleneck.
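
For what it's worth, a minimal sketch of the parallel-copy pattern being described (hypothetical paths; adjust the file list to taste):

Code:
# Launch three copies in parallel, then wait for all of them; compare the
# aggregate on the network graph against a single cp of the same data:
for f in file1 file2 file3; do
    cp /mnt/tank/src/"$f" /mnt/readynas/dst/ &
done
wait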
 

JoeB

Contributor
Joined
Oct 16, 2014
Messages
121
CPU encryption limitation for SCP?
Hmm! Maybe; my CPU graph is showing 100%.

I cannot test right now as the copy hasn't finished yet, but when it does I'll try just a single cp, not scp, and observe the speed and CPU.

I have two SCPs running at the moment and they're both at 23 Mbps.
 