10 Gig Networking Primer

Joined
Mar 22, 2016
Messages
217
I went and bought the Intel X520-DA2 and DA1 cards and a 10-meter 50/125 LC-LC aqua-colored fiber cable. One purchase came with four Unix-branded modules.

Transferring from a Windows 10 desktop to a FreeNAS server with a 4x4TB RAIDZ1 pool, using CIFS:
1) Default adapter settings got me a solid 144MB/s transfer on a 9GB MP4 file.
2) With jumbo frames set to 9014 and the transfer buffers maxed out on both adapters, I got up to 600MB/s on the same file (the rough FreeNAS-side equivalent is sketched below).
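For reference, the FreeBSD/FreeNAS-side equivalent of the Windows "Jumbo Packet: 9014 bytes" setting is an interface MTU of 9000, since the Windows figure includes the 14-byte Ethernet header. A minimal sketch, assuming the X520 shows up as ix0 (the interface name is an assumption; adjust for your system):

    # check the current MTU on the 10G interface
    ifconfig ix0
    # enable jumbo frames (both ends of the link, and any switch in between, must match)
    ifconfig ix0 mtu 9000

In the FreeNAS GUI the same thing is usually done by putting "mtu 9000" in the interface's Options field so it survives a reboot.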

I still need to fine-tune the adapters on both ends, so I'll post back later with results.

Wouldn't 600MB/s be about the max for a 4x4TB RAIDZ1? Or are you sending it to another RAM disk?


 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Sadly the problem there is that it'd take a good number of hours (~millions aggregate) to really deem that to be stable, and with all the relatively cheap 10G options out there these days, that's not too likely to happen. :-/
 

Borja Marcos

Contributor
Joined
Nov 24, 2014
Messages
125
Sadly the problem there is that it'd take a good number of hours (~millions aggregate) to really deem that to be stable, and with all the relatively cheap 10G options out there these days, that's not too likely to happen. :-/

They are working like a charm for us running NFS. Maybe they could be faster, but the uptime is over a year now, just so you know.
 

Jon K

Explorer
Joined
Jun 6, 2016
Messages
82
Hi all - looking to get a dual-port SFP+ DA card because I have single-port Mellanox SFP+ cards in two ESXi hosts. I want to bridge through my FreeNAS box so that I have a 3-node 10 GbE network that I'll use for NFS/vMotion.

I see recommendations for Chelsio, but do the Mellanox SFP+ cards work? Specifically, the Mellanox ConnectX-2? http://www.ebay.com/itm/Mellanox-MN...432221?hash=item5d6579afdd:g:bhsAAOSwq19XBqax

Thanks all
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
In theory the Mellanox cards are now supported due to some recent additions to FreeBSD, but the total number of hours on which this has been tested on actual FreeNAS systems is probably a lot closer to a thousand than a million, so you'd be on the discovering end of any problems. By comparison, the Chelsio cards are the only ones that have been consistently stable since... well, at least since FreeNAS 9 was introduced.

Generally speaking, I don't strongly encourage the use of the LSI 12Gbps HBAs because the aggregate hours on them are maybe in the millions ballpark, whereas we know the LSI 6Gbps HBAs are up around the billion-plus hour mark. That's just saying that if you want a problem-free experience, your best chance is going with what has worked well so far. If you don't mind the potential for some bumps and issues, then the Mellanox stuff is an interesting choice, and definitely inexpensive. But be aware of the newness and the potential for issues, up to and including completely not working or even panicking the system.
 

Jon K

Explorer
Joined
Jun 6, 2016
Messages
82
In theory the Mellanox cards are now supported due to some recent additions to FreeBSD, but the total number of hours on which this has been tested on actual FreeNAS systems is probably a lot closer to a thousand than a million, so you'd be on the discovering end of any problems. By comparison, the Chelsio cards are the only ones that have been consistently stable since... well, at least since FreeNAS 9 was introduced.

Generally speaking, I don't strongly encourage the use of the LSI 12Gbps HBAs because the aggregate hours on them are maybe in the millions ballpark, whereas we know the LSI 6Gbps HBAs are up around the billion-plus hour mark. That's just saying that if you want a problem-free experience, your best chance is going with what has worked well so far. If you don't mind the potential for some bumps and issues, then the Mellanox stuff is an interesting choice, and definitely inexpensive. But be aware of the newness and the potential for issues, up to and including completely not working or even panicking the system.

Excellent - I agree, I am all for running the more proven card. I saw X520s, but people have had issues. I am leaning toward the Chelsios - do you have a PN# or model# that's recommended for a dual-port card? No issues bridging, right? It'll be a switchless 10 GbE setup between 3 nodes.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Nobody's had issues with the X520 that I know of since the driver fix for that went in quite some time ago (year? two years?). There's some possibility that the X520 might need a config tweak or two to make it work properly, but nothing serious.

The Chelsio T420-CR and T520-CR are both awesome dual port cards that work out of the box.

I can't actually make any promises regarding the bridging thing because it's quite hacky and most people have just been buying switchgear. However, I can tell you that it is expected to work, and if not, I have extensive experience with bridging and other networking muck in FreeBSD and I'll be happy to see if I can help straighten any issues out.
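If it helps to picture it, the bridge on the FreeNAS box would look roughly like this - just a sketch, and the interface names are assumptions (a Chelsio T520-CR's ports typically show up as cxl0/cxl1, a T420-CR's as cxgbe0/cxgbe1):

    # create a software bridge and add both 10G ports as members,
    # so traffic can pass between the two ESXi hosts through the FreeNAS box
    ifconfig bridge0 create
    ifconfig cxl0 up
    ifconfig cxl1 up
    ifconfig bridge0 addm cxl0 addm cxl1 up
    # the storage IP then goes on bridge0 rather than on the physical ports

Keep in mind that pushing every frame through the host CPU is exactly the "hacky" part - a real switch does all of that in hardware.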
 

Jon K

Explorer
Joined
Jun 6, 2016
Messages
82
Nobody's had issues with the X520 that I know of since the driver fix for that went in quite some time ago (year? two years?). There's some possibility that the X520 might need a config tweak or two to make it work properly, but nothing serious.

The Chelsio T420-CR and T520-CR are both awesome dual port cards that work out of the box.

I can't actually make any promises regarding the bridging thing because it's quite hacky and most people have just been buying switchgear. However, I can tell you that it is expected to work, and if not, I have extensive experience with bridging and other networking muck in FreeBSD and I'll be happy to see if I can help straighten any issues out.

Awesome, jgreco - thanks a ton for your support. I will check into the T420-CR and T520-CR and see how it goes. I have experience with the X520-T2, but that's the 10GBase-T version. I think I'd rather match the most-supported hardware on the FreeNAS side of things for my testing. I'd like to pursue bridging on the 10 GbE since it makes vMotion of 8-32GB RAM machines nice. I could do LACP for vMotion on another DVS if I had to, but yeah.. Thanks again jgreco, I'll grab a dual-port card and follow up.

PS - iSCSI MPIO with a zvol is set up and RIPPING. I am going to pick up an S3710 or similar soon, but man, IOPS for everyone! I do understand that with iSCSI only the metadata is sync'd and the actual block data is async.
 
Joined
Feb 2, 2016
Messages
574
We use Intel X520-DA2 cards with both 9.3 and 9.10. They have been flawlessly reliable, with more than a year of usage on 9.3 and a few months on 9.10.

We have done no performance tuning. Performance is good enough - better than the 4 x 1Gb LACP we had in place previously - so we elected to stick with the stock configuration. I'm sure we're leaving some bits on the table, but we're not well enough versed in BSD network tuning to start twirling knobs.

Cheers,
Matt
 

paulatmig

Dabbler
Joined
Jul 14, 2014
Messages
41
My only trouble is trying to find the right kern.ipc.nmbclusters setting (right now at 16773104). Every 20-30 days the system hangs when the networking components crash (run out of memory), and having ~16M clusters allocated for two T420s (using both ports on each) doesn't seem to be enough. Or something. I submitted a bug for this problem and hope to get some guidance, but I'm curious if anyone else has had the same problem.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
My only trouble is trying to find the right kern.ipc.nmbclusters setting (right now at 16773104).
kern.ipc.nmbclusters increases the number of network mbuf clusters the system is willing to allocate. Each cluster represents approximately 2K of memory, so a value of 524288 represents 1GB of kernel memory reserved for network buffers. So you've allocated 16773104 x 2048 = ~32GB of kernel memory. I'm guessing this is why your system is going catatonic.
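If it helps, the relevant numbers are easy to check - a sketch only, and the 2097152 below is just an illustration (2097152 x 2048 = ~4GB of cluster memory), not a recommendation for your particular box:

    # show mbuf/cluster usage versus the configured limits
    netstat -m
    # show the current limit
    sysctl kern.ipc.nmbclusters
    # a more conservative limit can be set as a loader tunable
    # (in /boot/loader.conf, or via a Tunable in the FreeNAS GUI):
    kern.ipc.nmbclusters="2097152"

Whatever value you pick, watch netstat -m under load to make sure you're not actually running up against the limit.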
 

paulatmig

Dabbler
Joined
Jul 14, 2014
Messages
41
What's odd is that's the value assigned by autotune, so I'm figuring autotune is just going a little overboard with that number.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, autotune doesn't work as well as people like to think.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
What's odd is that's the value assigned by autotune, so I'm figuring autotune is just going a little overboard with that number.
I don't know, I've never used autotune. You should reduce that value to something more sane for your hardware/workload and see if your system is more stable.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Yes, autotune doesn't work as well as people like to think.
Wasn't CJ going to do some work on autotune to make it suck less?!? I've seen it hamstring a person's performance before, but never make it hang. Though, I don't comment much on autotune posts.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Wasn't CJ going to do some work on autotune to make it suck less?!? I've seen it hamstring a person's performance before, but never make it hang. Though, I don't comment much on autotune posts.

No, Cyberjock lacks the familiarity. That was me. I've got extensive FreeBSD tuning experience in much harsher network environments than this, but I've been busy with actual work work.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
No, Cyberjock lacks the familiarity. That was me. I've got extensive FreeBSD tuning experience in much harsher network environments than this, but I've been busy with actual work work.
Oops my apologies.

I found another thread from paulatmig talking about his server crashing and you pointed out the nmbclusters size issue. Looks like he didn't change it from back then. He mentioned having 262GB of memory in the server.

Edit: he first said 192GB then 262GB. So who knows...

Here is the thread:
https://forums.freenas.org/index.php?threads/sudden-loss-of-network-at-3am.39777/#post-279746
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Oops my apologies

No worries. Cyberjock's a quick study and if he wanted to, I'm pretty sure he could take a day or three and do a deep dive and figure out how to make some substantial optimizations.

The real problem is that memory size isn't the only factor in optimization; things like whether you have 1G or 10G, crappy network cards, etc., matter too.
 

paulatmig

Dabbler
Joined
Jul 14, 2014
Messages
41
Oops my apologies.

I found another thread from paulatmig talking about his server crashing and you pointed out the nmbclusters size issue. Looks like he didn't change it from back then. He mentioned having 262GB of memory in the server.

Edit: he first said 192GB then 262GB. So who knows...

Here is the thread:
https://forums.freenas.org/index.php?threads/sudden-loss-of-network-at-3am.39777/#post-279746

Yeah, started off with 192GB, then upgraded to 262GB of load-reduced memory.
 