10Gb NICs and iperf is showing 1Gb speeds

Status
Not open for further replies.

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Buying the compatible transceivers and relabeling them as Extreme Networks.
AHHHH!!! (ROFL) Would make some serious coin! LOL :D
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Try adding multiple threads to the iperf command and run longer than 10 seconds so it can stabilize and reach maximum throughput. You might be CPU/bus bound on the older hardware. Check the CPU use while it's working on the transfer.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Code:
 iperf -c 10.10.10.242 -w 512k -P 4 -t 30
------------------------------------------------------------
Client connecting to 10.10.10.242, TCP port 5001
TCP window size:  513 KByte (WARNING: requested  512 KByte)
------------------------------------------------------------
[  4] local 10.10.10.252 port 52870 connected with 10.10.10.242 port 5001
[  5] local 10.10.10.252 port 45627 connected with 10.10.10.242 port 5001
[  6] local 10.10.10.252 port 48231 connected with 10.10.10.242 port 5001
[  3] local 10.10.10.252 port 49172 connected with 10.10.10.242 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-30.0 sec  2.03 GBytes   582 Mbits/sec
[  5]  0.0-30.0 sec  2.04 GBytes   584 Mbits/sec
[  6]  0.0-30.0 sec  2.19 GBytes   626 Mbits/sec
[  3]  0.0-30.0 sec  2.02 GBytes   577 Mbits/sec
[SUM]  0.0-30.0 sec  8.28 GBytes  2.37 Gbits/sec
[patrick@storage] ~% iperf -c 10.10.10.242 -w 512k -P 2 -t 60
------------------------------------------------------------
Client connecting to 10.10.10.242, TCP port 5001
TCP window size:  513 KByte (WARNING: requested  512 KByte)
------------------------------------------------------------
[  4] local 10.10.10.252 port 38073 connected with 10.10.10.242 port 5001
[  3] local 10.10.10.252 port 54512 connected with 10.10.10.242 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  8.87 GBytes  1.27 Gbits/sec
[  3]  0.0-60.0 sec  8.62 GBytes  1.23 Gbits/sec
[SUM]  0.0-60.0 sec  17.5 GBytes  2.50 Gbits/sec
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Yep, just tried that. In the screenshot I attached, you can see the first few tests are a direct connection with the Finisar modules, all tests with a 3-foot blue 10Gb fiber cable found here... . The second test, at 60 seconds, is through the switch, and for the last test I pulled the Finisar modules and tested with the Chelsio module, direct connect.

192.168.1.1 is a SuperMicro X7DBN, dual 3.2GHz dual-core Xeons w/HT and 32GB of RAM.
192.168.1.2 is a SuperMicro X8DTH-iF, single 3.46GHz 6-core Xeon w/HT and 48GB of RAM.

Both were running Linux Mint for the test.

So it seems I can ALMOST get 6Gb/s running Linux Mint, which has me wondering why not on FreeNAS. But I'm still unable to pull 10Gb; I'm still about 4Gb/s shy.

So, other than checking firmware, which I haven't done yet, any more ideas?
 

Attachments

  • MintLive.JPG (149.8 KB)

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
I ran some other tests. Still seeing great speeds on the new server. The old server is shut down at the moment.

Code:
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 127.0.0.1
TCP window size:  256 KByte (default)
------------------------------------------------------------
[  4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 28817
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  41.8 GBytes  35.9 Gbits/sec
^C[patrick@storage] ~% iperf -B 127.0.0.1 -s -w 512k
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 127.0.0.1
TCP window size:  512 KByte
------------------------------------------------------------
[  4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 26307
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  44.9 GBytes  38.5 Gbits/sec
^C[patrick@storage] ~% iperf -B 127.0.0.1 -s -w 1024k
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 127.0.0.1
TCP window size: 1.00 MByte
------------------------------------------------------------
[  4] local 127.0.0.1 port 5001 connected with 127.0.0.1 port 43938
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  44.8 GBytes  38.5 Gbits/sec
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Yep, the thought was to see how much theoretical speed I could achieve.
It was a thought.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Just to be sure it's clear: 127.0.0.1 would not touch your card.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Well, as I said, in theory. I really have NO idea what to do next other than do a firmware check, but even then I'm not sure what to run at the command line to determine the current firmware version on the card.
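
For what it's worth, if the card is a Chelsio running under the FreeBSD/FreeNAS cxgbe(4) driver, something like this should report the firmware from the console (a sketch; t4nex is an assumption for a T4-series card, T5-series cards show up as t5nex):

Code:
# The firmware version is logged when the driver attaches:
dmesg | grep -i chelsio
# cxgbe(4) also exposes it as a sysctl:
sysctl dev.t4nex.0.firmware_version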
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't get it? What for?

Printing up tiny labels that read "Extreme Networks", of course. :)

The point is that in general there are only a few companies on the planet that make the actual optics, and often/usually you can get optics that are electronically and physically identical, because Extreme Networks buys theirs from (guessing here) Finisar. So it's extremely common for large networking organizations to buy an SFP programmer and use generic optics. Your current Internet traffic is, virtually guaranteed, going over at least some generic optics right now.

I find it simpler and cheaper just to buy used optics and test them. For example, for the Dell gear we use here (actually a Force10 Networks design), I was finding the optics on eBay for about $20-$25 each; you can't even get generic optics new for that price, much less cover the cost of the programmer.

Usually there'll be some smart guys out there selling "private label" versions of the SFPs which are just preprogrammed to look legit. You could take a laser printer and make yourself some legit-looking teeny SFP labels, and the only way an Extreme Networks engineer would be likely to notice would be if they started validating serial numbers. :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
In re-reading through the rest of this, I'm thinking that your old X7 may just be out of steam. You might try linking the two servers directly together and configuring jumbo frames on that link, just to see... if it's merely out of oomph to handle packets-per-second, that'll increase speeds dramatically, but if you're out of some sort of internal bandwidth, it won't make a big difference. Basically, a lot of the old gear is never likely to consistently get close to 10G, especially when you consider all the NAS protocol stuff sitting on top. What I'm saying is that if you can only get 5Gbps with iperf, you're going to get a lot less when you factor in the NAS protocol, ZFS, etc.
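
As a rough sketch of that jumbo test (cxl0 is an assumed Chelsio interface name; substitute whatever ifconfig shows for your 10Gb port, and do it at both ends of the direct link):

Code:
# Raise the MTU to 9000 on the 10Gb interface at each end:
ifconfig cxl0 mtu 9000
# Then re-run the earlier iperf test and compare:
iperf -c 10.10.10.242 -w 512k -P 4 -t 60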
 

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
Good thread - sorry you're having grief, I know how frustrating it can be.

Firstly, testing loopback/127.0.0.1 performance is a useful step; it confirms your PC has the CPU headroom to handle 10-gig speeds.

The problem is most systems are optimised for 1G out the door, so you need to tune appropriately. Do not jump to 9000-byte packets until you have maxed out standard 1500-byte frames and are limited by CPU performance. You can monitor CPU core utilisation and interrupt usage with `top` (a quick sketch follows below) and see what your CPU is doing. Going to 9K packets reduces the interrupt frequency but increases latency in the network, and you should decide from the off what is more important to you. You either tune for latency or throughput; they tend to affect one another to a small degree, and it's possible to get 9Gbps throughput at high latencies, which makes the network feel 'slower' than the numbers suggest.
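
For the monitoring side on FreeBSD/FreeNAS, both of these are in the base system:

Code:
# Per-CPU utilisation, including system and interrupt time:
top -P
# Per-device interrupt counts and rates, handy for watching the NIC:
vmstat -i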

It's also useful to look at the complete stack of buffers as one big buffer rather than each piece individually. What I mean by this is that tuning the server and ignoring the client will not result in great perf; only when everything is in sync does it fly. Adding in switches adds another send/recv buffer you need to consider, so it's worth doing a direct connection from client to server to get things working and then adding in switches/routers etc.

I found that the auto-scale feature was a waste of time. Maybe that's a harsh statement, because in some use cases it might be OK, but IME when it worked it reacted too slowly. I found the easiest approach was to disable it and set the send/recv windows manually. It's possible to set them too large, which will affect latency, but 10G is moving a lot of data, so 512-byte windows are useless; you need 3-4MBytes for send/recv at both ends IME.
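
On FreeBSD/FreeNAS that maps onto sysctls roughly as follows (a sketch only; the 4MB figure just follows the 3-4MByte suggestion above, and on FreeNAS you'd persist these as tunables rather than run them by hand):

Code:
# Disable TCP send/receive buffer auto-scaling:
sysctl net.inet.tcp.sendbuf_auto=0
sysctl net.inet.tcp.recvbuf_auto=0
# Raise the socket buffer ceiling, then pin ~4MB default windows:
sysctl kern.ipc.maxsockbuf=8388608
sysctl net.inet.tcp.sendspace=4194304
sysctl net.inet.tcp.recvspace=4194304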

I read a fair amount about how networking stacks work before I got my head round this stuff, so it's worth spending time on the FreeBSD manuals and a good book on networking. Randomly tuning settings didn't work at all for me, and copying stuff from the internet without understanding it was just a waste of time - there aren't enough people with experience of this stuff documenting it to cookbook it yet.

I've been using Intel X520 NICs, primarily because I use a Mac and needed 10G speeds from that, and they work well enough. I've just ordered a Chelsio T520-CR, a T520-SO-CR, and a T420-CR from eBay to do some benchmarking with, as I'm curious what the differences are. I'll try and make some time to document a process for tuning 10gig, but I need to knock some rough edges off my understanding and notes before I commit them to public scrutiny.
 

wtfR6a

Explorer
Joined
Jan 9, 2016
Messages
88
In re-reading through the rest of this, I'm thinking that your old X7 may just be out of steam. You might try linking the two servers directly together and configuring jumbo frames on that link, just to see... if it's merely out of oomph to handle packets-per-second, that'll increase speeds dramatically, but if you're out of some sort of internal bandwidth, it won't make a big difference. Basically, a lot of the old gear is never likely to consistently get close to 10G, especially when you consider all the NAS protocol stuff sitting on top. What I'm saying is that if you can only get 5Gbps with iperf, you're going to get a lot less when you factor in the NAS protocol, ZFS, etc.

Good point. For reference, my 2.3GHz i7 MBP sees circa 50% CPU with flat-out 10G transfers running. The overheads of FreeNAS/ZFS aren't inconsiderable. From your FreeNAS console, `top -P` will show you your CPU core utilisation and interrupt levels; it's worth looking at them whilst you are running your iperf test. Let us know what you see.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Printing up tiny labels that read "Extreme Networks", of course. :)

The point is that in general there are only a few companies on the planet that make the actual optics, and often/usually you can get optics that are electronically and physically identical, because Extreme Networks buys theirs from (guessing here) Finisar. So it's extremely common for large networking organizations to buy an SFP programmer and use generic optics. Your current Internet traffic is, virtually guaranteed, going over at least some generic optics right now.

I find it simpler and cheaper just to buy used optics and test them. For example, for the Dell gear we use here (actually a Force10 Networks design), I was finding the optics on eBay for about $20-$25 each; you can't even get generic optics new for that price, much less cover the cost of the programmer.

Usually there'll be some smart guys out there selling "private label" versions of the SFPs which are just preprogrammed to look legit. You could take a laser printer and make yourself some legit-looking teeny SFP labels, and the only way an Extreme Networks engineer would be likely to notice would be if they started validating serial numbers. :)
I tried to explain that exact same point, but at the end of the day I can't order anything without my manager signing off on the requisition form. I have found that most of the time (not always, but most) the person managing the datacenter team is the least qualified to be working in a datacenter.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
you're going to get a lot less when you factor in the NAS protocol, ZFS, etc.
True, but I've seen other members pointing out 8 and 9Gb/s.
I don't know... I'll have to do more testing. I'll have to connect directly to my desktop and see what I can pull from it.
But I do agree that my old NAS is likely out of steam. The same things that would make it lag (when it was in use) don't even make the new one so much as hiccup.

I found that the auto scale feature was a waste of time
What is this "auto scale" that you are referring to?

I need to knock some rough edges of my understanding and notes before I commit them to public scrutiny.
Oh! I hear that! Sometimes people can just be abrasive!
Well, if I can help, I will, in any way possible. I'd like to learn more about this stuff too. My problem has always been that if I have the time I don't have the money, and if I have the money I don't have the time. Something I'm still trying to balance.
But yeah, if you want someone to bounce ideas off of, or help with something, shoot me a PM; maybe we can get together. I know I have much to learn.

I tried to explain that exact same point, but at the end of the day I can't order anything without my manager signing off on the requisition form. I have found that most of the time (not always, but most) the person managing the datacenter team is the least qualified to be working in a datacenter.
You'll find this is the case most of the time no matter where you go. Sometimes you'll find someone that believes and trusts in their employees to make decisions that are good for the company, but it's not often.


You know, I wish there was a way to run speed tests directly from the switch to specific devices. I'll have to look into it more, but there isn't a lot of documentation on the switch I'm running, the Quanta LB4M. People still seem to be figuring it out, but it does seem to do what it's supposed to.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I tried to explain that exact same point, but at the end of the day I can't order anything without my manager signing off on the requisition form. I have found that most of the time (not always, but most) the person managing the datacenter team is the least qualified to be working in a datacenter.

Incompetence gets promoted. You don't want them working on the gear, so it's the least dangerous place for them to be, especially if there's a bunch of clue underneath them mitigating the stupid.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
True, but I've seen other members pointing out 8 and 9Gb/s.
I don't know... I'll have to do more testing. I'll have to connect directly to my desktop and see what I can pull from it.

Yeah, I know, I haven't been able to think of any great suggestions for you at the moment, sorry.
 