X520 speed issue

Status
Not open for further replies.

TheDubiousDubber

Contributor
Joined
Sep 11, 2014
Messages
193
That is totally understandable; however, it is useful to know this stuff for those who are looking to upgrade a workstation to 10G. Expecting equipment to work out of the box is a different story. I do know that Mellanox has recently updated drivers specifically for Win10. I haven't been able to test them yet, though.
 
Joined
Mar 22, 2016
Messages
217
Any recommendations for the workstation environment, specifically Windows 10?
 

TheDubiousDubber

Contributor
Joined
Sep 11, 2014
Messages
193
So I just got the Mellanox card set up, and right off the bat it's getting around 5.6Gbps using iperf. Hopefully after some tweaking I can get better numbers. I tried installing the older driver after installing the card, but it kept failing. I ended up going to the Mellanox website, and coincidentally they had recently released a driver for Win10.

Edit: Scratch that; I just realized support was dropped for the ConnectX-2. The driver only supports ConnectX-3 and 4, though it installs just fine and works with the X-2.
 
Last edited:
Joined
Mar 22, 2016
Messages
217
Looks like the Chelsio T4/520 has drivers for Windows 10 as well, or at least that's what I'm gathering from: http://service.chelsio.com/downloads/Microsoft/

The Mellanox connectx-2 is significantly cheaper than the Chelsio cards.
 

TheDubiousDubber

Contributor
Joined
Sep 11, 2014
Messages
193
All in all, this was not a worthwhile endeavor. I ended up bricking the Mellanox card in the process of messing around with it; not a big deal, seeing as it was only $15. I put the x520 back in and set up the network with a newly acquired Dell X1052, and I get a whopping 2+Gbps (topping out around 2.89). I switched over to the Fedora installation and it doesn't appear to change speeds, as I saw before when testing with them directly connected. I'm clearly in way over my head. I'm a networking newb, so trying to deal with a fully managed switch is interesting, and trying to deal with crap speeds that are roughly 25% of what they should be is incredibly annoying. The only consolation is that 2Gbps is still better than I'd get when maxing out my old 1GbE.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The managed switch should be no different than an unmanaged switch as long as you don't do anything to it. If in doubt, clear the configuration, reload, and then leave it alone.
 

TheDubiousDubber

Contributor
Joined
Sep 11, 2014
Messages
193
The managed switch should be no different than an unmanaged switch as long as you don't do anything to it. If in doubt, clear the configuration, reload, and then leave it alone.

It was throwing out errors, but it turns out they are inconsequential with regard to my setup. The switch settings have mostly been left alone.

I'm still stuck at square one, though. I can't seem to get above 3Gbps with either a Windows or a Linux client.

Win10/Fedora > x520 > om3 > x1052 > om3 > x520 > FreeNAS = 3Gbps max

Are there any settings in FreeNAS that can be tweaked, or is it more or less plug 'n' play? I also thought it could be an issue with the optics, but I was under the impression that they either work or they don't, and they do.

Edit: tweaked some settings. Set RSS Queues to 4 (I'm running an i5, so I don't have the 8 logical cores the default setting assumes). Jumbo frames on = 4.2Gbps Win10 client > FreeNAS server, but 1.3 in the other direction. Jumbo frames off = 2.3Gbps Win10 client > FreeNAS server, but 3.4Gbps when using FreeNAS as client and Win10 as server.
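When toggling jumbo frames it's worth verifying that every hop in the path actually passes the larger MTU. A quick check from the Win10 box is a do-not-fragment ping at near-jumbo size (the address is just an example; 8972 bytes = a 9000-byte MTU minus the 28-byte IP/ICMP header):

Code:
ping -f -l 8972 192.168.1.200

If any hop's MTU is smaller, Windows reports "Packet needs to be fragmented but DF set."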
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That is totally understandable; however, it is useful to know this stuff for those who are looking to upgrade a workstation to 10G. Expecting equipment to work out of the box is a different story.

Sure, but at some point you have to look at hardware compatibility when picking stuff, especially when you're reliant on Windows drivers...

I'm still stuck at square one, though. I can't seem to get above 3Gbps with either a Windows or a Linux client.

Win10/Fedora > x520 > om3 > x1052 > om3 > x520 > FreeNAS = 3Gbps max

Are there any settings in FreeNAS that can be tweaked, or is it more or less plug 'n' play? I also thought it could be an issue with the optics, but I was under the impression that they either work or they don't, and they do.

What you want to do to get a better idea here is to check for errors. On a FreeBSD box, you can run "netstat -h 1" and watch for errors or drops. It's also helpful to look at top to see if there's particularly heavy system/interrupt loading. On the switch, there'll be a command to look at the counters, probably something like "sho int count ten" (I don't have the command reference for the X1052 handy, but just troll around). You shouldn't see error counts; if you do, clear the counters and then monitor for additional errors. Errors are *possible* on a healthy link, but unlikely.
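As a minimal sketch of what that looks like on the FreeNAS side (exact column names vary a little between FreeBSD releases):

Code:
# Print interface totals once per second; the errs and drops
# columns should stay at zero on a healthy link:
netstat -h 1

# In another session, watch for heavy system/interrupt CPU load:
top -SH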

Edit: tweaked some settings. Set RSS Queues to 4 (I'm running an i5, so I don't have the 8 logical cores the default setting assumes). Jumbo frames on = 4.2Gbps Win10 client > FreeNAS server, but 1.3 in the other direction. Jumbo frames off = 2.3Gbps Win10 client > FreeNAS server, but 3.4Gbps when using FreeNAS as client and Win10 as server.

Oh, you're running that on a slow core Nehalem setup?

Um.

See, this gets complicated and weird as things go suboptimal. That Nehalem's a seven-year-old first try at eliminating the FSB; Intel didn't do real well with Nehalem. My suspicion is that you don't have enough oomph to push that much data that fast, and with a slow old dual-CPU setup especially, that is likely to be a significant factor in what is hurting you. With both sockets, all eight cores, and a tailwind, that thing probably Geekbenches around 9800, with each core peaking at maybe 1600. By comparison, the single-CPU E3-1230 (v1) Sandy Bridge from just two years later will easily shove 3000 per core with 11000+ overall. This is favorable when doing intensive I/O because it means less shuttling of data around the system, and for single-threaded applications like Samba, the 3000 beats the 1600 every time. I'm sure you got a fantastic deal on that E5530 box, but there was a reason for that, and it wasn't that someone wanted to give you a deal. They wanted to pawn off their old slow crap on you and recover some cash.

Latency works against you, and this can be somewhat configured around if that's what's hurting you. Because your system probably responds more slowly to the network, and your network is 10x faster than the typical Nehalem NAS, you definitely need to look towards tuning if you'd like to get more out of it. Increasing buffer sizes and possibly shifting to an alternative congestion control algorithm, as discussed in a recent wifi thread, are both potential optimizations. What's happening might at first seem different from the wifi thread, where the network itself is very slow, but from TCP's point of view there's no real difference between slowness of the host and slowness of the network: both just introduce latency. So I think I'd try making some adjustments from that point of view and see if things get better.
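As a rough sketch of the kind of knobs involved (the values here are illustrative only, not recommendations; change one at a time and re-test):

Code:
# Larger socket buffers let TCP keep more data in flight when
# latency is high relative to the link speed:
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216

# Optionally switch the congestion control algorithm; the cc_htcp
# module has to be loaded first:
kldload cc_htcp
sysctl net.inet.tcp.cc.algorithm=htcp

On FreeNAS these would be entered under System -> Tunables rather than run by hand, so they survive a reboot.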
 

TheDubiousDubber

Contributor
Joined
Sep 11, 2014
Messages
193
I understand the Nehalem setup is less than ideal, but the very same hardware managed to transfer 5-6Gbps when testing the ConnectX-2 in the workstation with Windows/Fedora >> FreeNAS (direct connect). Prior to that I managed 9+Gbps from Fedora >> FreeNAS on the X520, but was stuck at 1+ on Windows (also direct connect). Now with the switch in the mix I'm getting different speeds again, though oddly enough, the same speeds on both Windows and Fedora.

I did notice a number of errors on the Dell switch. Links are dropping, followed by STP status Forwarding, so something is going on there. Not sure if it's a symptom of the same problem.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It doesn't matter if the very same hardware managed to do ${X}. I can make your server appear to go faster by scrapping ZFS and going to UFS. Or by throwing it out a tenth story window. That's not useful or helpful here. The problem is, the speed you will get is VERY dependent on a lot of things working very well simultaneously. The business of NAS is very much dependent on not having weak links. Any weak link kills performance.

I had some great fileservers back in 2005 that could pump out gigabit without a problem. Put ZFS and FreeNAS on them, and they dropped to a quarter of the speed. ZFS itself is a massive CPU pig. The intention with ZFS is that you're going to replace insanely expensive specialized RAID silicon with general purpose CPU silicon, so a ZFS filer will take a lot more CPU to accomplish the task.

You are much closer to that situation with my 2005-era fileservers than you are to modern gear. In order to be successful with ZFS, you may have to get over that mental block and recognize that the platform itself could certainly be an impediment.
 

Stryf

Dabbler
Joined
Apr 3, 2016
Messages
19
My results are similar to the OP's later posts. Windows 10 defaults to a version 4.x driver. I then installed the Windows 8.1 64-bit v21.0 drivers, which provide more advanced settings and performance gains.

Windows 10 64bit (FX-8320 4.6GHz) with Intel x520-DA1 >> OM3 multi-mode LC-LC 10 meters << Intel x520-DA2 with Freenas (E3-1220 v3 @ 3.10GHz)

Win10 = 1x Samsung 850 Evo 256GB
FreeNAS = raidz1 pool using 4x 4TB spinning disks

The default adapter settings on the client side were giving me CIFS transfer speeds of around 150-160MB/s for both download and upload. Using the modified settings below, my downloads stayed the same but the upload is now at 400-500MB/s, which seems to be about right for a raidz1 setup.

Here are my adapter settings on the Win10 side:
Code:
Interrupt Moderation = Disabled
Jumbo Packet = 9014
Large Send Offload V2 (IPv4) = Enabled
Large Send Offload V2 (IPv6) = Enabled
Locally Administered Address = Null
Log Link State Event = Enabled
Maximum Number of RSS Queues = 8 Queues
Offloading Options
    IPSec Offload               = Both Enabled
    IPv4 Checksum Offload       = Disabled
    TCP Checksum Offload (IPv4) = Disabled
    TCP Checksum Offload (IPv6) = Disabled
    UDP Checksum Offload (IPv4) = Disabled
    UDP Checksum Offload (IPv6) = Disabled
Packet Priority & VLAN = Both Enabled
Performance Options
    Flow Control              = Both Enabled
    Interrupt Moderation Rate = Low
    Low Latency Interrupts    = Null
    Receive Buffers           = 4096
    Transmit Buffers          = 16384
Receive Side Scaling = Enabled


I'm looking into optimizing the FreeNAS side now.
Not sure if these Unix UP-FTLX8571D3BCV-IT modules are the cause?
Maybe my Win10 SSD is bottlenecked at 150MB/s?
 

TheDubiousDubber

Contributor
Joined
Sep 11, 2014
Messages
193
I haven't had much time lately to continue messing around with settings, but in the time I had, I messed around a lot and didn't get very far. Currently I average around 300-350MB/s throughput, mainly when transferring large video files from FreeNAS via SMB -> Win10 client. I have seen as high as 450MB/s at times. In other words, your SSD is not the bottleneck, as I am also running a single-drive 850 EVO on my client. As far as my current settings go:

Interrupt Moderation: Enabled
Jumbo Packet : Disabled (I'm not sure why you would need this)
Large Send Offload IPv4 and IPv6: Enabled
Log Link: Enabled
Max Number of RSS Queues: 1 Queue (resulted in higher performance than multiple queues)
Offloading Options: Defaults (All Enabled)
Performance Options:
Flow Control: Enabled
Interrupt Mod Rate: Adaptive
Low Latency Interrupts: Use for packets with TCP PSH flag
Receive Buffers: 512 (Default)
Transmit Buffers: 512 (Default)
Receive Side Scaling: Enabled

It appears you changed a lot of the settings away from their defaults; you possibly know more than I do, but I'm not sure why they were changed. I had previously found that raising the buffers gave a significant performance increase, but after trying other adapters and then starting over with the x520, it did not help the second time around. The most significant increase I saw came from lowering the number of queues. From my (limited) understanding this is directly linked to CPU utilization. I'm not sure how it all fits together, but I'm guessing that multithreading isn't working, which is why a single queue was best in my situation. Whatever your specific hardware setup is, it could show different results. This whole thing is a bit confusing, and I don't understand a lot of what is being done, so I suggest sticking to defaults and changing one thing at a time to see if it helps. If there is no noticeable difference, why change it?
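If it helps anyone following the one-change-at-a-time approach, the same advanced properties can be scripted from an elevated PowerShell prompt (a sketch only; the adapter name "Ethernet 2" and the exact display names are assumptions that vary by driver version):

Code:
# List every advanced property the driver exposes, with current values:
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"

# Change exactly one setting, e.g. drop to a single RSS queue,
# then re-run iperf before touching anything else:
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Maximum Number of RSS Queues" -DisplayValue "1 Queue"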
 

Stryf

Dabbler
Joined
Apr 3, 2016
Messages
19
Update #2
I yanked my x520-DA2 out of FreeNAS and stuck it in another Win10 PC. Using a RAM disk on both Win10 boxes, the send and receive file transfers were a solid 275MB/s and iperf was 2.37Gb/s. I'm starting to think Win10 is the major limitation, so my next post will be on Win10 tuning, if any can be found. Maybe I'll throw in a Debian/Ubuntu live CD to try as well.

Update #3
I put the x520 back into FreeNAS and found a post by Rand, in the 10G Primer, who has an x550 and the same 16GB of RAM as me but a better CPU. His ifconfig line increased my iperf from 3.26Gbits/s to ~7.76Gbits/s. CIFS download is still 150-160MB/s, but the upload is now a solid 590MB/s. Uploads were 400-500MB/s before I added the tunables and the ifconfig setting Rand posted.

Win10 = iperf -c 192.168.1.200 -r
FreeNAS = iperf -s

Results from FreeNAS terminal:
Code:
[root@FreeNAS] ~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.200 port 5001 connected with 192.168.1.100 port 59006
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  9.04 GBytes  7.76 Gbits/sec
------------------------------------------------------------
Client connecting to 192.168.1.100, TCP port 5001
TCP window size: 2.00 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.200 port 18347 connected with 192.168.1.100 port 5001
Waiting for server threads to complete. Interrupt again to force quit.
[  4]  0.0-10.0 sec  1.58 GBytes  1.36 Gbits/sec
 

Stryf

Dabbler
Joined
Apr 3, 2016
Messages
19
Update #4
I've doubled my download speed on Win10 to 330MB/s and increased upload to about 790MB/s by adding net.inet.tcp.tso = 0 as a sysctl-type tunable. (Turning TSO off on the x520 itself, via ifconfig, gives poorer iperf results.)

ifconfig ix0 mtu 9014 rxcsum txcsum tso4 lro

Tunables added to Loader that may have helped:
hw.igb.rxd = 4096
hw.igb.txd = 4096
hw.ix.flow_control = 0
hw.ix.rx_process_limit = 512
hw.ix.rxd = 4096
hw.ix.tx_process_limit = 512
hw.ix.txd = 4096
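
A quick way to confirm the settings actually took effect after a reboot (a minimal sketch; ix0 is assumed to be the X520 interface):

Code:
# The sysctl tunable should report 0:
sysctl net.inet.tcp.tso

# The loader tunables should report the new ring sizes:
sysctl hw.ix.rxd hw.ix.txd

# mtu 9014 plus TSO4/LRO should show up in the interface options:
ifconfig ix0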


CIFS:
Freenas >>> Win10 = 330MB/s
Win10 >>> Freenas = 790MB/s (after 80% it slopes down to 300)

Code:
[root@FreeNAS] ~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[  4] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 58112
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  7.50 GBytes  6.43 Gbits/sec
[  5] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 58263
[  5]  0.0-10.0 sec  7.78 GBytes  6.68 Gbits/sec
[  4] local 10.0.0.1 port 5001 connected with 10.0.0.2 port 58399
[  4]  0.0-20.0 sec  15.6 GBytes  6.68 Gbits/sec
^C[root@FreeNAS] ~# iperf -c 10.0.0.2 -i 2 -t 20
------------------------------------------------------------
Client connecting to 10.0.0.2, TCP port 5001
TCP window size: 2.00 MByte (default)
------------------------------------------------------------
[  3] local 10.0.0.1 port 12971 connected with 10.0.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec   628 MBytes  2.63 Gbits/sec
[  3]  2.0- 4.0 sec   632 MBytes  2.65 Gbits/sec
[  3]  4.0- 6.0 sec   627 MBytes  2.63 Gbits/sec
[  3]  6.0- 8.0 sec   632 MBytes  2.65 Gbits/sec
[  3]  8.0-10.0 sec   632 MBytes  2.65 Gbits/sec
[  3] 10.0-12.0 sec   626 MBytes  2.63 Gbits/sec
[  3] 12.0-14.0 sec   631 MBytes  2.65 Gbits/sec
[  3] 14.0-16.0 sec   627 MBytes  2.63 Gbits/sec
[  3] 16.0-18.0 sec   632 MBytes  2.65 Gbits/sec
[  3] 18.0-20.0 sec   632 MBytes  2.65 Gbits/sec
[  3]  0.0-20.0 sec  6.16 GBytes  2.64 Gbits/sec
 
Joined
Mar 22, 2016
Messages
217
I should be receiving my fiber from fs.com today. With that said, I will start contributing my findings with Win10 and my FreeNAS box.

The FreeNAS box is currently set up as:
X10SRL-F
E5-1650 v3
128GB RAM
LSI 9207
Running two pools: one is a 12-drive mirrored-vdev configuration (HGST 7200 RPM drives) and the other pool is a 4-SSD mirrored-vdev config.

I'm hoping that hardware should be able to push 10Gb with no issues.

One workstation has two E5-2675 v3's installed, while the other two workstations are on i7 5960's.

Right now the other editors and I have it pegged at 105MB/s on the gigabit connection when transferring projects around. I can't wait to get the 10Gb set up. Moving 200+ GB files around regularly on 1Gb connections is painful.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
A Nehalem should be able to do 10Gb, assuming you don't end up bottlenecked somewhere else. I have the lowest-end 1366 CPU in my test rig (2.0GHz, I believe) and it can saturate 10Gb.

I'm thinking something in your network infrastructure is to blame, but you're on your own to figure out what. :/
 
Joined
Mar 22, 2016
Messages
217
Code:
[root@freenas] ~# iperf -c 192.168.2.155 -i 2 -t 20
------------------------------------------------------------
Client connecting to 192.168.2.155, TCP port 5001
TCP window size: 35.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.165 port 46047 connected with 192.168.2.155 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec   760 MBytes  3.19 Gbits/sec
[  3]  2.0- 4.0 sec   764 MBytes  3.20 Gbits/sec
[  3]  4.0- 6.0 sec   762 MBytes  3.19 Gbits/sec
[  3]  6.0- 8.0 sec   758 MBytes  3.18 Gbits/sec
[  3]  8.0-10.0 sec   759 MBytes  3.18 Gbits/sec
[  3] 10.0-12.0 sec   762 MBytes  3.19 Gbits/sec
[  3] 12.0-14.0 sec   752 MBytes  3.15 Gbits/sec
[  3] 14.0-16.0 sec   759 MBytes  3.18 Gbits/sec
[  3] 16.0-18.0 sec   757 MBytes  3.18 Gbits/sec
[  3] 18.0-20.0 sec   751 MBytes  3.15 Gbits/sec
[  3]  0.0-20.0 sec  7.42 GBytes  3.18 Gbits/sec

Well, needless to say, I'm exactly where Stryf is. With everything at defaults, these are the exact numbers I was getting to the NAS. When I added the tunables, they either hurt performance to the NAS or left it where it was, and they killed the reads from the NAS.

Code:
C:\Users\Dillon\iperf>iperf.exe -c 192.168.2.165 -i 2 -t 20
------------------------------------------------------------
Client connecting to 192.168.2.165, TCP port 5001
TCP window size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.155 port 62539 connected with 192.168.2.165 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 2.0 sec   376 MBytes  1.58 Gbits/sec
[  3]  2.0- 4.0 sec   372 MBytes  1.56 Gbits/sec
[  3]  4.0- 6.0 sec   372 MBytes  1.56 Gbits/sec
[  3]  6.0- 8.0 sec   374 MBytes  1.57 Gbits/sec
[  3]  8.0-10.0 sec   377 MBytes  1.58 Gbits/sec
[  3] 10.0-12.0 sec   369 MBytes  1.55 Gbits/sec
[  3] 12.0-14.0 sec   362 MBytes  1.52 Gbits/sec
[  3] 14.0-16.0 sec   362 MBytes  1.52 Gbits/sec
[  3] 16.0-18.0 sec   370 MBytes  1.55 Gbits/sec
[  3] 18.0-20.0 sec   370 MBytes  1.55 Gbits/sec
[  3]  0.0-20.0 sec  3.62 GBytes  1.55 Gbits/sec


And despite this (which would be a read from the NAS), I was able to hit 1GB/s reads from it... things are getting weird.
 
Last edited:

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, since your iperf isn't doing 6Gb/sec+ (which is what I'd expect; closer to 8Gb/sec typically), you definitely have network issues to resolve.
 

Stryf

Dabbler
Joined
Apr 3, 2016
Messages
19
I'm at work so I don't have the exact numbers, but I saw improvements after I used SpeedGuide.net's TCP Optimizer on the Win10 client. Within the program I moved the slider to the far right, selected the "Optimal" setting, and gave it a reboot. I think it disables Win10's auto-tuning, heuristics, and TCP Chimney Offload. On the Windows 10 client I used "iperf.exe -c 10.0.0.1 -i 2 -t 10 -d" so the transfer goes both ways, and I got ~2.5Gbps down and 9.5Gbps up. File transfer is 300MB/s download and 1GB/s upload. The strange thing is that if I run the same iperf test from the FreeNAS side, the numbers are about the same in both directions (around 2 to 2.5Gbps). htop showed 25% CPU on all four cores. It's starting to feel like a Windows 10 issue. I'll be installing FreeNAS (and then Ubuntu) on the Windows 10 computer using a thumb drive and then do another iperf test.
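
For reference, the equivalent changes can be made by hand from an elevated command prompt (a sketch of what I believe the tool is toggling, not an exact dump of its changes):

Code:
:: Disable TCP receive window auto-tuning:
netsh int tcp set global autotuninglevel=disabled

:: Disable Windows scaling heuristics:
netsh int tcp set heuristics disabled

:: Disable TCP Chimney Offload:
netsh int tcp set global chimney=disabled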

Oh, and my brother is buying me 16GB more RAM to max out the mobo.
 
Joined
Mar 22, 2016
Messages
217
I'm going to first try to rule out any bad connections or parts, and if that fails, move on to the program. I'm no network engineer, but I'm going to try to learn and solve the issues.


 