Network Adapter Question

davidjt

Dabbler
Joined
Oct 28, 2020
Messages
13
Any idea if the move to SCALE will change compatibility for network cards? I'm interested in the NVIDIA MCX512A-ACUT ConnectX-5 EN adapter card. I see that the ConnectX-4 cards worked in FreeNAS. I wanted to future-proof a bit with a 25Gb adapter instead of 10Gb.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Any idea if the move to Scale will change compatibility for network cards? I'm interested in NVIDIA MCX512A-ACUT ConnectX-5 EN Adapter Card. I see that the X-4 cards worked in Freenas. Wanted to try to future proof a bit with a 25Gb adapter instead of 10Gb.
The change is from CORE = FreeBSD 12.x to SCALE = Debian Linux. In general, Linux has better hardware support. However, SCALE does not have as much test time or as many systems in the field as CORE. So: better support in the longer term, but some bugs/surprises in the short term.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
I can at least confirm ConnectX-3 Ethernet-based cards for you; they've been working since the earliest alpha :)
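If you want to double-check what driver SCALE has bound to a card, something like this works from a shell (enp4s0 is just a placeholder interface name):
lspci -nn | grep -i mellanox   # confirm the card shows up on the PCIe bus
ethtool -i enp4s0              # shows the bound driver; a ConnectX-3 should report mlx4_en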
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
I cannot get TrueNAS CORE (TrueNAS-13.0-U1.1) to work with my Intel 10G cards. FreeNAS worked fine with them, but only if I had an Intel-branded optical transceiver. My Solarflare card worked fine in CORE. Has anyone else seen that issue with CORE? I thought I had broken the cable. I purchased more cables, no joy. I put another system on the cable and it was fine. I had to install a Solarflare card in the server to get it up on the network. I do not think the Solarflare card was supported in FreeNAS. I am confused about my network.
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
I am still having trouble with multiple 10Gb ports being set up properly on Intel and Solarflare cards using TrueNAS-13.0-U3.1. The drivers seem to install OK, but the links never come up. Only one of the ports works properly and will establish a link to another machine. I have Mellanox ConnectX-2 and ConnectX-3 cards that work fine. Is anyone else having this issue? I was able to set up a bridge using three dual-port Mellanox 10Gb cards.
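For reference, the working Mellanox bridge amounts to roughly the following from a shell (interface names are placeholders for the Mellanox ports; on CORE the change only persists if you build the bridge through the web UI rather than with ifconfig):
# CORE / FreeBSD sketch -- mlxen0..mlxen2 are example interface names
ifconfig bridge0 create
ifconfig bridge0 addm mlxen0 addm mlxen1 addm mlxen2 up
ifconfig mlxen0 up
ifconfig mlxen1 up
ifconfig mlxen2 up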
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I am still having trouble with multiple 10Gb ports being setup properly on Intel and SolarFlare cards using TrueNAS-13.0-U3.1.
You're using CORE, but posting in the SCALE forum, which isn't likely to get you much help. Suggest starting your own thread in an appropriate forum.
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
You're using CORE, but posting in the SCALE forum, which isn't likely to get you much help. Suggest starting your own thread in an appropriate forum.
Oops. I'll be more careful. I just searched for 10Gb/s issues. Different world, I get it. Thanks.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Intel and SolarFlare

This is also grievously vague; certain cards in these families are known to work well (Intel X520, X710, probably also SFN 5122, SFN 6122) while others are associated with a higher rate of problem reports (X540, X550, SFN 5161T, etc.)

The Intels will generally be SFP-locked to Intel branded SFP+'s, though there is a driver flag that may disable this behaviour. It's smarter just to feed it the Intel SFP+'s which cost an average of about $10/ea on eBay.
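For reference, the flag I'm talking about, for those who want to try it anyway (the exact names here are from memory, so verify against the driver docs for your release before relying on them):
# CORE / FreeBSD ix(4): loader tunable (add under System > Tunables, type "loader", then reboot)
hw.ix.unsupported_sfp="1"
# SCALE / Linux ixgbe: module parameter, e.g. in a modprobe options file
options ixgbe allow_unsupported_sfp=1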

In general, I'm a little suspicious of the Solarflare cards and lots of other "budget" cards such as Mellanox where stuff like TSO/LRO, which are probably necessary on most platforms to support full 10Gbps, are not as extensively worked over and tuned as the Intel or Chelsio drivers, which have a LOT of love and care invested in them by their respective manufacturers. This becomes more complicated where you need stuff like VLAN tagging offload or bridging support as well; some cards that work fine for pure NAS do not work as well once you need VM's and jails due to bridging's unique needs. The Intels, for example, need certain offload features to be disabled for bridging, but work best with offload enabled the rest of the time. 10G and beyond is not a thing where you can just take whatever random crap you found on eBay and expect it to work swimmingly well in every case. There's a huge amount of community knowledge buried away in the 10 Gig Networking Primer about many of these issues, but it is admittedly difficult to find.
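To make the offload point concrete, "disable certain offload features" on a CORE/FreeBSD Intel port that's a bridge member looks roughly like this (interface name is a placeholder; put the flags in the interface's Options field in the web UI if you want them to survive a reboot):
# CORE / FreeBSD example, ix0 as a bridge member -- flags are illustrative, not a tuning recipe
ifconfig ix0 -tso -lro -vlanhwtso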
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
This is also grievously vague; certain cards in these families are known to work well (Intel X520, X710, probably also SFN 5122, SFN 6122) while others are associated with a higher rate of problem reports (X540, X550, SFN 5161T, etc.)
My Intel cards are X520s.
My Mellanox cards are Dell Y3KKR Mellanox ConnectX-3 (won't work in an HP Z400, but works in other motherboards), and I have some ConnectX-2 cards that work well. I have used them with bridges but not VMs yet.
My Solarflare cards are:
Solarflare 107221 SF-107221 ISS Dual Port
My switch does not factor into my question, but it is an Aruba 2500 with four 10Gb/s ports. (I originally planned to use a bridge and multiple cards.) I only recently got that working with the Mellanox cards (in CORE, not SCALE). I have been afraid to try SCALE because I wanted bridge support, and it is hard to tell whether that works. It is not in the Primer.
The Intels will generally be SFP-locked to Intel branded SFP+'s, though there is a driver flag that may disable this behaviour. It's smarter just to feed it the Intel SFP+'s which cost an average of about $10/ea on eBay.
I found out from the hardware that, yes, for optical cables they want Intel transceivers, but any DAC seems OK.
In general, I'm a little suspicious of the Solarflare cards and lots of other "budget" cards such as Mellanox where stuff like TSO/LRO, which are probably necessary on most platforms to support full 10Gbps, are not as extensively worked over and tuned as the Intel or Chelsio drivers, which have a LOT of love and care invested in them by their respective manufacturers. This becomes more complicated where you need stuff like VLAN tagging offload or bridging support as well; some cards that work fine for pure NAS do not work as well once you need VM's and jails due to bridging's unique needs. The Intels, for example, need certain offload features to be disabled for bridging, but work best with offload enabled the rest of the time.
The offload could be the issue with the bridging I am trying to do. I'll just use the Mellanox cards; they just seem to work.
10G and beyond is not a thing where you can just take whatever random crap you found on eBay and expect it to work swimmingly well in every case.
I purchased Mellanox cards because, when I started on this journey in 2019, that seemed to be the recommended brand and it had a lot of traction. Now it seems absent from the Primer; Chelsio and Intel seem to be the ones in vogue now. The Solarflare cards had easy support in Windows. I suppose my collection is just what I could afford. The Intel cards were a gift.
There's a huge amount of community knowledge buried away in the 10 Gig Networking Primer about many of these issues, but it is admittedly difficult to find.
The 10 Gig Networking Primer was useful; it is one of the reasons I thought this would be interesting to explore. I chose SFP+ instead of copper based on cost and availability. Some of the caveats in your message are not identified there:
  • Enhanced support for VMs, with VLAN tagging and TSO/LRO for example. I only have a couple of VMs, but I'd like to know if I am buying a dead end.
  • Trouble reports on particular models, and identifying support in SCALE vs. CORE.
Thanks for the help. I'll try to be clearer about the issue in the future and get my signature/spoiler updated. I'll also look at which CORE vs. SCALE forum I'm in. It seems like there are, and should be, common challenges.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Solar Flare 107221 SF-107221 ISS Dual Port

I think that's an SFN6122F, maybe. Solarflare has a lot of part numbers and I don't have a magic decoder ring (sorry).

It is not in the Primer.

True. The 10G Primer was written in the days before SCALE, so it is highly focused on FreeBSD. However, the advice for FreeBSD is typically also applicable to Linux.

any DAC seems OK.

DACs are typically much more forgiving on a vendor-locked SFP+ system, but that comes with the annoying price of being locked into a fixed length. If you someday need a longer cable, you are back at square one, which is why I encourage fiber. Nothing wrong with DAC if it suits you, though.

The offload could be the issue with the bridging I am trying to do. I'll just use the Mellanox, they just seem to work.

It's a frustrating compilation of different device and driver quirks, in my experience. Different drivers have various interactions with the system, offload, and bridging capabilities.

I purchased Mellanox cards because when I started on this journey in 2019 that seemed to be the recommended brand and had a lot of traction. Now it seems absent from the primer. Chelsio and Intel seem the ones in vogue now

The 10G Primer has never advocated Mellanox; I've always felt their cards are too complicated for beginners, with the different personalities and arcane setup tools. The Chelsios have always been the favorite because that's what iXsystems provided with TrueNAS systems back in the day, and once some driver crash issues were resolved, Intel X520 joined them as an option as well. I still feel that the X520 is one of the best "beginner" cards because of the lack of complicated firmware issues and the fact that support for the 82599 is ubiquitous.

Some of the caveats in your message are not identified there.

This is true. The 10G Primer is nearly a decade old and was written at a time when stuff like bridging and VM support didn't exist and wasn't a concern. The real problem is that even were I to rewrite it (and there's some pressure to do so, updating it for 25G/40G/etc) I really do not have a comprehensive list of test results for stuff like VLAN tagging support. Some stuff like TSO/LRO you can just make an intelligent guess "yes that should be disabled with bridging" but properly testing the rest really requires a testing environment that I don't necessarily have, so I tend to get a little handwavey in place of absolute fact.
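On the SCALE side at least, checking and toggling those guesses per interface is easy with ethtool (enp3s0 is a placeholder interface name; the -K change does not persist across reboots):
ethtool -k enp3s0 | grep -E 'tcp-segmentation-offload|large-receive-offload'   # inspect current state
ethtool -K enp3s0 tso off lro off   # disable TSO/LRO for testing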
 