Terrible performance with 10gigE.

Status
Not open for further replies.

Alan Latteri

Dabbler
Joined
Sep 25, 2015
Messages
16
Hello,

I am getting a max of 240 MB/s reads over NFS with 10GbE. All iperf tests check out at almost line rate. To eliminate the possibility of a disk bottleneck, I created a single-disk stripe on a 400 GB Intel 750 NVMe, which has around 2000 MB/s throughput.

Dell T630, 128 GB RAM, 2x quad-core 3.0 GHz Xeons, PERC H730 in HBA mode, Intel X520 network card connected to a Dell X4012 switch with a direct-attach cable. The client uses the same network card, optically connected to the switch. iperf between the two is line rate.
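For anyone wanting to reproduce the baseline, a minimal sketch of the iperf check (iperf2 syntax; the addresses are the ones from my setup, adjust as needed):

```shell
# On the FreeNAS box: start the iperf server.
iperf -s

# On the Linux client: 30-second run with 4 parallel TCP streams,
# which should show close to line rate on a healthy 10GbE link.
iperf -c 192.168.1.12 -t 30 -P 4
```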

I've tried with and without autotune, the latest 9.3 stable and nightly builds, with and without compression, NFS 3 and 4, standard and 1M record sizes, more NFS threads, and MTU 9000. I've tried adding -o rsize=1048576 to the NFS mount, but it always sticks to rsize=131072. The speed never goes above 240 MB/s.

If I use the same setup, same hardware, but running CentOS 7, I get line speed, around 950 MB/s.

Something is up with FreeNAS. Please help; I've spent the past week trying to get this working smoothly in FreeNAS.

Client side mount:
192.168.1.12:/mnt/beast/test/ /mnt/test nfs4 rw,relatime,vers=4,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.100,minorversion=0,local_lock=none,addr=192.168.1.12 0 0
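For the record, a rough sketch of how I've been requesting and then checking the negotiated rsize on the Linux client (the server's maximum caps whatever the client asks for, which is presumably why 1048576 falls back to 131072):

```shell
# Ask for 1 MiB transfers at mount time (Linux client):
mount -t nfs4 -o rsize=1048576,wsize=1048576 192.168.1.12:/mnt/beast/test /mnt/test

# Check what was actually negotiated -- the server-side limit wins:
nfsstat -m | grep -o 'rsize=[0-9]*'
# or inspect the live mount options directly:
grep /mnt/test /proc/mounts
```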

Attached are the tunables.
 

Attachments

  • Screen Shot 2015-10-01 at 2.18.48 PM.png (57.7 KB)

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
CentOS is not really a good comparison to FreeNAS. Try the test using FreeBSD with ZFS; that will give you a more apples-to-apples comparison.

My guess is it's a problem with ZFS overhead, or with the FreeBSD drivers for the X520. A test with FreeBSD will either rule this out or confirm it.
 

Alan Latteri

Dabbler
Joined
Sep 25, 2015
Messages
16
I installed ZFSguru 10.3, imported the same pool, and ran the same test. Only slightly better results. There must be something wrong with the driver for the network card in BSD. I'm going to try a Mellanox card tomorrow.
 

Alan Latteri

Dabbler
Joined
Sep 25, 2015
Messages
16
Thanks to Josh for recommending the tunable sysctl hw.ix.enable_aim=0.
My speeds are now 600-700 MB/s. Damn Intel X520 cards just suck hard with FreeBSD. Don't use them.
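For anyone following along, a sketch of how that tunable gets applied (AIM is the ix driver's adaptive interrupt moderation; the GUI path below assumes FreeNAS 9.3):

```shell
# Disable adaptive interrupt moderation on the ix (X520) driver at runtime:
sysctl hw.ix.enable_aim=0

# To persist across reboots on FreeNAS, add it under System -> Tunables
# (variable hw.ix.enable_aim, value 0, type Sysctl).
# On plain FreeBSD, append it to /etc/sysctl.conf instead:
echo 'hw.ix.enable_aim=0' >> /etc/sysctl.conf
```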
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Just a note that hw.ix.enable_aim=0 doesn't always improve performance. :smile:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I thought Intel had good drivers?

Intel's drivers are fine, but as with anything nearer the bleeding edge, some tuning may be required. We had all these same sorts of issues fifteen years ago with gigabit Ethernet.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I thought Intel had good drivers?
According to jpaetzel, the FreeBSD driver is actually holding the cards back, even though the hardware itself is top-notch. Not as much work goes into it as with the Windows or Linux drivers.
 

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
Thanks to Josh for recommending the tunable sysctl hw.ix.enable_aim=0.
My speeds are now 600-700 MB/s. Damn Intel X520 cards just suck hard with FreeBSD. Don't use them.
Could you post how you added that tunable? I am having a similar issue with Myricom 10 GbE cards.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's a driver tunable for the Intel card. It has nothing to do with Myricom cards. You can find out more about Myricom's sysctl tunables in the man page.

https://www.freebsd.org/cgi/man.cgi?query=mxge&sektion=4

It appears to be a fairly old driver, so it is probably missing a lot of modern optimizations, and the hardware is probably a limiting factor as well.
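A quick sketch of how to see what the mxge driver actually exposes on your FreeBSD build (node names vary by driver version, so enumerate rather than guess):

```shell
# List the runtime knobs for the first Myricom interface:
sysctl dev.mxge.0

# Loader-time tunables and their meanings are documented in the man page:
man 4 mxge
```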
 