SOLVED Slow iperf with onboard Intel 82574L / Intel 82576EB


NoTalent

Dabbler
Joined
Jun 24, 2013
Messages
28
My iperf speeds between my test laptop and my FreeNAS server are very slow, around 170-180 Mbits/sec.

Code:
[root@freenas] ~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.12 port 5001 connected with 192.168.1.3 port 53569
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  1.27 GBytes   181 Mbits/sec


I'm not sure where to go from here, any help would be appreciated.
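For reference, this is roughly how the client side was run from the laptop (a sketch; the 60-second duration matches the interval in the output above, and the -P 4 run is just something to try to rule out a single-stream window limit):

Code:
iperf -c 192.168.1.12 -t 60
iperf -c 192.168.1.12 -t 60 -P 4
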

System:
Build: FreeNAS-9.3-STABLE-201602031011
Platform: AMD Opteron(tm) Processor 6128
Memory: 32740MB

System NIC setup:
My motherboard has 4 Gigabit ports:
The Intel 82576EB is a dual-port NIC wired to an x4 PCIe port.
The other two ports are each an Intel 82574L connected to an x1 PCIe lane.
The system block diagram is Figure 2.2 from this document: http://www.tyan.com/manuals/S8230_UG_v1.0_06212012.pdf
[Attached image: tyan8230.PNG]

Client/Server setup:
My FreeNAS box is connected directly to my laptop's Gigabit port with a 6-foot Cat 6 cable (new from the package) plugged into em0.

I manually set a static IP on both the FreeNAS box and the Windows box.
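For the record, a temporary static address can also be set on the FreeNAS side from the CLI (a sketch only; the persistent setting belongs in the GUI under Network -> Interfaces):

Code:
[root@freenas] ~# ifconfig em0 inet 192.168.1.12 netmask 255.255.255.0
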

FreeNAS config data:
Code:
[root@freenas] ~# ifconfig
igb0: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
        ether 00:e0:81:c5:9e:fe
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect
        status: no carrier
igb1: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
        ether 00:e0:81:c5:9e:ff
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect
        status: no carrier
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
        ether 00:e0:81:c5:9c:81
        inet 192.168.1.12 netmask 0xffffff00 broadcast 192.168.1.255
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
em1: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
        ether 00:e0:81:c5:9c:80
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect
        status: no carrier
ipfw0: flags=8801<UP,SIMPLEX,MULTICAST> metric 0 mtu 65536
        nd6 options=9<PERFORMNUD,IFDISABLED>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0xd
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>


dmesg output:
Code:
[root@freenas] ~# dmesg | more
em0: <Intel(R) PRO/1000 Network Connection 7.4.2> port 0xd800-0xd81f mem 0xfe9e0000-0xfe9fffff,0xfe9dc000-0xfe9dffff irq 48 at device 0.0 on pci6
em0: Using MSIX interrupts with 3 vectors
em0: Ethernet address: 00:e0:81:c5:9c:81
pcib3: <ACPI PCI-PCI bridge> irq 54 at device 10.0 on pci0
pci5: <ACPI PCI bus> on pcib3
em1: <Intel(R) PRO/1000 Network Connection 7.4.2> port 0xc800-0xc81f mem 0xfe8e0000-0xfe8fffff,0xfe8dc000-0xfe8dffff irq 47 at device 0.0 on pci5
em1: Using MSIX interrupts with 3 vectors
em1: Ethernet address: 00:e0:81:c5:9c:80

igb0: <Intel(R) PRO/1000 Network Connection version - 2.4.0> port 0xe400-0xe41f mem 0xfeac0000-0xfeadffff,0xfea80000-0xfea9ffff,0xfea40000-0xfea43fff irq 44 at device 0.0 on pci7
igb0: Using MSIX interrupts with 9 vectors
igb0: Ethernet address: 00:e0:81:c5:9e:fe
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: Bound queue 2 to cpu 2
igb0: Bound queue 3 to cpu 3
igb0: Bound queue 4 to cpu 4
igb0: Bound queue 5 to cpu 5
igb0: Bound queue 6 to cpu 6
igb0: Bound queue 7 to cpu 7
igb1: <Intel(R) PRO/1000 Network Connection version - 2.4.0> port 0xe800-0xe81f mem 0xfebc0000-0xfebdffff,0xfeb80000-0xfeb9ffff,0xfeb40000-0xfeb43fff irq 45 at device 0.1 on pci7
igb1: Using MSIX interrupts with 9 vectors
igb1: Ethernet address: 00:e0:81:c5:9e:ff
igb1: Bound queue 0 to cpu 0
igb1: Bound queue 1 to cpu 1
igb1: Bound queue 2 to cpu 2
igb1: Bound queue 3 to cpu 3
igb1: Bound queue 4 to cpu 4
igb1: Bound queue 5 to cpu 5
igb1: Bound queue 6 to cpu 6
igb1: Bound queue 7 to cpu 7

Note: I assume igb0 and igb1 are the two ports of the 82576EB, since both are on the same PCI bus (pci7).
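That assumption can be double-checked with pciconf, which lists the vendor and device strings for each attached driver, so the 82574L and 82576EB ports can be told apart directly:

Code:
[root@freenas] ~# pciconf -lv | more
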

Things I have changed:
  • I tried 3 different network cables per the suggestions in other threads in the FreeNAS forum.
  • I have tried all 4 network ports; none showed an increase in speed.

Things I need to do for troubleshooting:
  • Try another client laptop/desktop machine.
  • Take this client laptop and see its iperf performance with other machines.
  • Get an Intel PCIe network card and try it.

Do any of the settings above look off? Any settings I can try with my FreeNAS install?
 

NoTalent

Dabbler
Joined
Jun 24, 2013
Messages
28
OK, I think I answered my own question.

I pulled out an old desktop running SLES and installed iperf on it.

743 Mbits/sec, and that system has a Realtek NIC, which I thought were supposed to be terrible.

The laptop has a Qualcomm Atheros AR8161 Gigabit controller. Maybe I need to update a driver or something?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Realteks and Qualcomm Atheros are about equally crappy on Windows. I believe the Realteks have crappier hardware, but more software hiding the defects with CPU time.
 

NoTalent

Dabbler
Joined
Jun 24, 2013
Messages
28
Thank you, Ericloewe. I've finally got the files transferred over, but still not at a speed I think is acceptable.

Once I compile the data, I'll post a new thread on trying to understand the bottleneck I'm seeing.

My connection looked like:
Desktop running SLES -> Realtek onboard NIC -> direct-connect Cat 6 cable -> Intel eth0 -> NFS share set up with the sharing wizard -> FreeNAS 9.
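The share was mounted on the SLES side along these lines (a sketch; /mnt/tank/share is a placeholder for my actual export path):

Code:
mkdir -p /mnt/freenas
mount -t nfs 192.168.1.12:/mnt/tank/share /mnt/freenas
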

iperf gave me 743 Mbits/sec.
The zpool is an 8-drive RAIDZ2 volume.
dd gave me ~180 MBytes/sec on writes, and faster than that on reads.

My write speeds to the pool from my desktop were averaging ~30 MBytes/sec. Am I wrong to assume that, since iperf is running at ~90 MBytes/sec (743 Mbits/sec ÷ 8 ≈ 93 MBytes/sec) and my dd write speeds to the pool were above that, I should be getting more than 30 MBytes/sec?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
What dd command did you use to test your pool speed? Your local write speeds should be around 450 MB/s. Something is wrong with your pool.
 

NoTalent

Dabbler
Joined
Jun 24, 2013
Messages
28
I used the tests described at the top of the Storage section in Help & Support:

Code:
dd if=/dev/zero of=tmp.dat bs=2048k count=50k
dd if=tmp.dat of=/dev/null bs=2048k count=50k

Can you point me to a thread on analyzing the zpool drive speeds?
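In the meantime, I'll watch per-disk throughput while a dd test runs with zpool iostat (assuming a pool named tank; mine may differ):

Code:
[root@freenas] ~# zpool status tank
[root@freenas] ~# zpool iostat -v tank 1
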
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Your dd command looks OK. I suspect your hardware is limiting your speeds. Start by checking the SMART status of your disks and run a SMART long test.
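Something along these lines, with ada0 as a placeholder device name, repeated for each disk in the pool:

Code:
[root@freenas] ~# smartctl -a /dev/ada0
[root@freenas] ~# smartctl -t long /dev/ada0
[root@freenas] ~# smartctl -l selftest /dev/ada0
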
 

NoTalent

Dabbler
Joined
Jun 24, 2013
Messages
28
Marking this thread as SOLVED.

The laptop I was using was limiting the iperf test results. Now that I have my box connected to the network, I'm getting a ~100 MB/s transfer rate to/from the CIFS share from my Windows box.
 