Poor LAN TCP Performance

Status
Not open for further replies.

Mootsfox

Cadet
Joined
Aug 8, 2012
Messages
5
Doing some benchmarking on my newly built FreeNAS box, and trying to troubleshoot my TCP performance in iperf.

On default settings I top out at about 0.3Gb/s. Sometimes, after a few minutes of benchmarking, it drops to ~0.17Gb/s.

Hardware is as follows:

SuperMicro X58 w/ Dual Intel NICs
Intel i7-960
24GB of RAM
8*2TB Samsung drives in RAID-Z2

Cat5e cables between everything
Linksys E2000 (gigabit) <- used to confirm gigabit connections

Clients:
HP tablet laptop with gigabit connection
Desktop with i7-870, 8GB RAM, gigabit connection

I'm about 99.9% sure that hardware is not the problem: the speeds are the same between the laptop and the desktop, I just swapped motherboards in the FreeNAS system with no change in performance, and I've also tried different LAN ports on both motherboards. That makes me think it's a setting somewhere within FreeNAS.

My question is whether there are settings in the TCP stack that can be tuned to make this thing actually perform at gigabit speeds.
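For what it's worth, the knobs people usually look at first on FreeBSD are the socket-buffer sysctls. This is only a sketch: the names below are the FreeBSD 8.x tunables, and the commented-out values are illustrative assumptions, not recommendations for this box.

```shell
# Inspect the current TCP buffer ceilings (FreeBSD sysctl names;
# persistent changes go in the FreeNAS GUI under System -> Tunables):
sysctl kern.ipc.maxsockbuf        # absolute socket-buffer ceiling
sysctl net.inet.tcp.sendbuf_max   # max auto-tuned TCP send buffer
sysctl net.inet.tcp.recvbuf_max   # max auto-tuned TCP receive buffer

# Hypothetical example values -- benchmark before keeping any of these:
# sysctl kern.ipc.maxsockbuf=4194304
# sysctl net.inet.tcp.sendbuf_max=2097152
# sysctl net.inet.tcp.recvbuf_max=2097152
```

On a 1ms-RTT LAN the default buffers are normally plenty for gigabit, so these mostly matter if something else has shrunk them.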
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You didn't mention your hard drives, RAID type, or which hard drive controllers you are using.

If you are using a 24-port PCI card you will have significant performance issues, because the PCI bus is limited to 133MB/sec. You certainly have a bottleneck somewhere though. That CPU and 24GB of RAM should rock unless you have a hardware issue or something like a 50TB array.

Edit: Have you tried using the iperf server that is built into FreeNAS? That might give you a better indication of the server's performance.

Have you tried doing a dd benchmark of your hard drives?
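For anyone following along, a dd write test along these lines is what is usually meant. This is a sketch: /tmp and the tiny 64MB size are placeholders for illustration only; on a real pool you'd write to a dataset path (e.g. /mnt/tank) with a file larger than RAM so caching doesn't inflate the number.

```shell
# Sequential write test: dd prints the achieved throughput when it finishes.
# For a real test, point 'of=' at the pool and raise 'count' well past RAM
# (e.g. count=50000 for ~50GB on a 24GB box).
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64
ls -l /tmp/ddtest.bin   # 64 MiB = 67108864 bytes
rm /tmp/ddtest.bin
```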
 

William Grzybowski

Wizard
iXsystems
Joined
May 27, 2011
Messages
1,754
Why would the hard drives or anything related matter? It is just network performance... iperf...

@Mootsfox, you can check whether you're hitting the wall on mbufs... netstat -m

If that's the case, you can increase it with kern.ipc.nmbclusters as a Tunable in the GUI...
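Concretely, that check and the tunable look something like this. A sketch, not a prescription: the "denied" wording matches FreeBSD 8.x netstat -m output, and 131072 is just an illustrative value.

```shell
# Look for mbuf cluster exhaustion: non-zero "denied" counters mean the
# stack has been refusing buffer requests.
netstat -m | grep -i denied

# Current cluster limit:
sysctl kern.ipc.nmbclusters

# To raise it persistently, add a Tunable in the FreeNAS GUI
# (System -> Tunables), e.g.:
#   Variable: kern.ipc.nmbclusters
#   Value:    131072
```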
 

Mootsfox

Cadet
Joined
Aug 8, 2012
Messages
5
You didn't mention your hard drives, raid type or hard drive controllers you are using.

If you are using a 24 port PCI card you will have significant performance issues because PCI bus is limited to 133MB/sec. You certainly have a bottleneck somewhere though. That CPU and 24GB of RAM should rock unless you have a hardware issue or something like a 50TB array.

Edit: Have you tried using iperf server that is built in with FreeNAS? That might give you a better indication of performance of the server.

Have you tried doing a DD benchmark of your hard drives?

The HDDs are 2TB Samsungs, connected directly to the board.

Thing is, I'm not even testing the drives yet. The network connection is what concerns me, because that is the clear bottleneck right now.

I open a shell and type "/usr/local/bin/iperf -sD", then start jperf on the client.

This box is currently sitting at ~10TB with 8 disks, but when complete the goal is 24x2TB disks in three arrays providing ~30TB of storage. The controller cards will be PCIe x8 with dual SAS connections for the backplane of the case.

Right now I'm worried about TCP performance and why it's 1/3 of what it should be.
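Since the rate degrades over a few minutes, a command-line client run with interval reporting can show exactly when the drop happens. A sketch using iperf2 options; 192.168.1.10 is a placeholder for the FreeNAS box's address.

```shell
# On the FreeNAS box: start the iperf server as a daemon.
/usr/local/bin/iperf -s -D

# On the client: a 10-minute run, reporting every 10 seconds, to see
# whether throughput sags partway through...
iperf -c 192.168.1.10 -t 600 -i 10

# ...and a parallel-stream run to see whether one TCP stream is the limit.
iperf -c 192.168.1.10 -t 60 -P 4
```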
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
From his message it sounded like he did iperf between his desktop and laptop but not between the server and his desktop or laptop. If the server can't retrieve the data from the array fast enough, or write it fast enough, that would cause poor performance.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I can get 120MB/sec+ with zero tweaking. Your server should be able to do the same without breaking a sweat.

Edit: Sent you a PM.
 

Mootsfox

Cadet
Joined
Aug 8, 2012
Messages
5
Well now I'm more confused.

Running the laptop as a server in jperf and the desktop as the client, I get the same jperf benchmark results (~0.3Gb/s).

So running:
FreeNAS -> Desktop
FreeNAS -> Laptop
Laptop -> Desktop

All get roughly the same 1/3 gigabit transfer rates.

Cables have been rotated around without any change in results.

Is it possible that jperf itself is causing the issue? I only get around 20MB/s in actual data transfers to the NAS anyway.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I doubt jperf is the issue. It sounds like you have a bad cable (or cables) or a Gb switch that sucks. I know I have a "green" 5-port and an 8-port switch that won't go above about 600Mb/sec from one port to the other. Really sucks. I'll never buy a "green" network switch again.

Jperf and iperf were designed to minimize CPU load and maximize network traffic, so CPU bottlenecks don't give you an artificially low result.

Edit: I'd try a direct connection using a crossover cable (may not be needed) and run an iperf test.
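For reference, a direct-connect test with no switch in the path looks something like this. A sketch: the interface names (em0, eth0) and the 10.0.0.x addresses are placeholders; any two addresses on the same private subnet will do.

```shell
# On the FreeNAS box (FreeBSD): put a temporary address on the NIC
# and start the server.
ifconfig em0 10.0.0.1 netmask 255.255.255.0
/usr/local/bin/iperf -s

# On the client (Linux shown): same subnet, then run the test.
ifconfig eth0 10.0.0.2 netmask 255.255.255.0
iperf -c 10.0.0.1 -t 60 -i 10
```

If the direct link also tops out at ~0.3Gb/s, the switch is exonerated and the problem is in a NIC, cable, or host setting.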
 

Mootsfox

Cadet
Joined
Aug 8, 2012
Messages
5
I doubt jperf is the issue. It sounds like you have a bad cable (or cables) or a Gb switch that sucks. I know I have a "green" 5-port and an 8-port switch that won't go above about 600Mb/sec from one port to the other. Really sucks. I'll never buy a "green" network switch again.

Jperf and iperf were designed to minimize CPU load and maximize network traffic, so CPU bottlenecks don't give you an artificially low result.

Edit: I'd try a direct connection using a crossover cable (may not be needed) and run an iperf test.

I believed the switch was the problem (I've been through three now), but the results have not changed. The setups I listed under "So running:" in my previous post were all directly connected with a crossover cable (which, you're right, was needed). For the two setups testing the FreeNAS box, I would test on one NIC for about 10 minutes, then configure the second NIC on the board and test on that. I did that with both boards, and with both the laptop and the desktop. I've already spent about 20 hours on this headache, so I greatly appreciate any feedback given :)
 
J

jpaetzel

Guest
Paste the output of netstat -m on the FreeNAS box please.
 

Mootsfox

Cadet
Joined
Aug 8, 2012
Messages
5
Left it running overnight (20-21MB/s) and got this error:

[screenshot: kernel panic message]

I had this error in the past after about 15 minutes and guessed it was a bad RAM slot; running without RAM in that slot, it didn't error out. I also ran that RAM through memtest for about 30 passes and it came back 100% clean. Now I'm thinking the error might be related to the CPU.

Paste the output of netstat -m on the FreeNAS box please.

[screenshot: netstat -m output]
 
J

jpaetzel

Guest
That panic does look a lot like faulty hardware. As for the netstat output, it doesn't look like you are running out of mbufs or any other resource.
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
I can get 120MB/sec+ with zero tweaking.
I used to get that also, on the 8.0.2 release. Since I upgraded to the latest version, the performance is cut in half... obviously I did not change anything on the hardware or network side. See my FreeNAS setup in my signature. Right now I get 55MB/sec from PC to NAS and 75MB/sec from NAS to PC.
 

yaneurabeya

Dabbler
Joined
Sep 5, 2012
Messages
26
Left it running overnight (20-21MB/s) and got this error:

[screenshot: kernel panic message]

I had this error in the past after about 15 minutes and guessed it was a bad RAM slot; running without RAM in that slot, it didn't error out. I also ran that RAM through memtest for about 30 passes and it came back 100% clean. Now I'm thinking the error might be related to the CPU.

I wouldn't be so sure about hardware issues (unless it's an ACPI/BIOS firmware bug). See this thread for more tips: http://freebsd.1045724.n5.nabble.com/general-protection-fault-panic-td5590862.html
 