Mellanox cards with CORE 13

jcizzo

Explorer
Joined
Jan 20, 2023
Messages
79
Hey all, I'm looking for experience with the Mellanox cards and the latest version of CORE 13.
Does anyone have anything to say about them? Specifically the cards that run at 40Gb?
All my research leads me to articles from years ago, the reliability seems all over the place, and I'm wondering how it all fares now in 2023.

The Mellanox cards are CHEEAAAAP!!

thanks!
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
ConnectX-2 cards work fine basically everywhere, in my experience. I would generally expect anything newer to be about as good.
 

abufrejoval

Dabbler
Joined
May 9, 2023
Messages
20
It used to be the switch prices that had me going cross-eyed... But then they offered a switch-less direct-connect mode with route-through, and I ticked that option with ConnectX-5 NICs.

I run Mellanox ConnectX-5 100Gbit NICs with somewhat FC-AL-like direct-connect cables (no switch) between three Skylake Xeons (sorry, much older), using the Ethernet personality drivers, in an oVirt 3-node HCI cluster running GlusterFS between them, while the rest of the infrastructure uses 10Gbit NICs (Aquantia and Intel).

I had originally hoped to use the InfiniBand personality for CUDA scale-out work, but in their infinite wisdom, right around the Nvidia takeover, that functionality was scrapped by some management types, and switchless Ethernet was the only thing that remained.

I had to go to 4k MTUs to get significantly better bandwidth than 10Gbit when going over a hop: not nearly 100Gbit/s, but around 40Gbit/s net, with very low latencies, which can't hurt Gluster performance.
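
In case it helps anyone reproduce this: the MTU bump itself is trivial on the Linux side. Here's a minimal sketch of what I mean; the interface name and the exact MTU value are placeholders for my setup, not anything TrueNAS- or oVirt-specific.

import subprocess
from pathlib import Path

# Minimal sketch: raise the MTU on a ConnectX point-to-point link and verify it.
# The interface name and MTU value are placeholders for my own setup.
IFACE = "enp65s0f0"   # hypothetical ConnectX-5 port name; check `ip link` for yours
MTU = 4000            # what worked for me; 9000-byte jumbo frames are the other usual choice

subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", str(MTU)], check=True)

# Verify the kernel accepted it before trusting any benchmark numbers.
print(IFACE, "MTU is now", Path(f"/sys/class/net/{IFACE}/mtu").read_text().strip())

Just remember that every node on the path, including the one doing the route-through, has to carry the same MTU, and that the setting doesn't persist across reboots unless you put it into the network configuration.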

For 10G, I can only recommend Aquantia/Marvell AQC107 NBase-T NICs for power efficiency, compatibility and ease of use on anything Linux.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
For 10G, I can only recommend Aquantia/Marvell AQC107 NBase-T NICs
Um, no. Chelsio's good, Intel's good; these aren't good, as shown by lots of threads about problems with them--all the more so since this thread is about CORE, which isn't "anything Linux."

As to the question about Mellanox cards, CX-2 cards have been working decently for me (in 10G Ethernet mode) on two of my Proxmox servers, but @jgreco doesn't seem to think too highly of them.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
As to the question about Mellanox cards, CX-2 cards have been working decently for me (in 10G Ethernet mode) on two of my Proxmox servers, but @jgreco doesn't seem to think too highly of them.

They're the Realteks of the 10G world. If you are fine with a finicky card that doesn't work particularly well, but must have a CHEAP CHEAP CHEAP! card, then perhaps the ConnectX-2 is in your future. Be prepared for stuff like learning how to set it between InfiniBand/FCoX/Ethernet modes, etc., and of course, being an old PCIe 2 card, it isn't particularly fast or efficient. There's also some quirky stuff with the offload functionality. It's not that it can't be made to work, probably suboptimally, but it is just not a good card for beginners, and not even a good card for pros. It is not a catastrophe like some of the Broadcom or Emulex cards, though.
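
To give a flavor of the mode juggling: on a Linux box with the mlx4 driver, it boils down to something like the sketch below. The PCI address is a placeholder, none of this is CORE-specific, and on ConnectX-4 and newer you'd be using mlxconfig from the Mellanox firmware tools instead.

from pathlib import Path

# Rough sketch: flip a ConnectX-2/3 (mlx4) port from InfiniBand to Ethernet on Linux.
# The PCI address is a placeholder; find the real one with `lspci | grep Mellanox`.
PCI_ADDR = "0000:03:00.0"   # hypothetical slot for the card
port = Path(f"/sys/bus/pci/devices/{PCI_ADDR}/mlx4_port1")

print("current personality:", port.read_text().strip())   # typically "ib", "eth" or "auto"
port.write_text("eth\n")                                   # request the Ethernet personality
print("new personality:", port.read_text().strip())

Whether that survives a reboot depends on the driver options you set, which is exactly the kind of homework that makes these a poor fit for beginners.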

Oh and the Aquantia cards SUUUUUUCK.
 

abufrejoval

Dabbler
Joined
May 9, 2023
Messages
20
Oh and the Aquantia cards SUUUUUUCK.
Can't speak for BSD; I only use that on my pfSense, which is good old Intel Gbit all around.

I used to be a BSD fan, especially when Lynne and Bill pushed out 386BSD while I was running UnixWare, and Linux truly sucked on a Minix file system.

But that was a long time ago and BSD mostly got stuck in the appliance space, while I run full stacks on Linux.
But yes, I've played with just about every variant of PC-BSD, FreeBSD, OpenBSD, even DragonFly every now and then, and they failed to lure me back.

The only reason I am looking at TrueNAS again is its Linux transition, so I guess I just got into the wrong crowd here: sorry!
I'll try to pay attention next time!

But in case you're considering going SCALE... (everybody else, please stop reading)

For Linux I've gone through pretty much every 10GBase-T card that was available (optical wasn't an option), and they were expensive, hot and generally overbuilt to support things like virtualization, FCoE, iSCSI and had all kinds of driver issues, or were segmented between Windows desktop and server editions.

For the home lab I wanted NBase-T: simple, affordable and low-power. The Aquantias have delivered that for me: I run 6 PCIe v2/3 x4 AQC107 NICs from Asus and 5 Sabrent Thunderbolt AQC107 NICs on Gen8-12 NUCs running EL8, Ubuntu and Pop!_OS, as well as Windows 10/2022 (server), without the least bit of trouble, at the speed and throughput I expect from 10Gbit.

The same NBase-T switches from Buffalo and Netgear also accommodate various 2.5Gbit ports, mostly 2.5Gbit USB3 from Realtek (yes, they work just fine), Gbit and even Intel i225/i226, although most of those have been replaced with the Sabrent TB 10Gbit NICs.

Aquantia's main innovation was to bring the PHY power requirements down from the 10 Watts that first-generation 10GBase-T adapters required to something like 3 Watts at 10Gbit, and proportionally less at 5/2.5 and 1Gbit/s. And they did this for both the NICs and the switch chips; most of the affordable NBase-T switches have Aquantia/Marvell ASICs inside.

When they came out eight years ago, the driver situation wasn't great: I remember having to compile drivers on EL7 systems, but I had to do that for 2.5Gbit USB NICs, too.

But ever since their drivers went upstream around the 4.9 kernel, that's gone away and nothing is as simple to use in the 1-10Gbit NBase-T range as Aquantia: it just works out of the box, nothing to worry about.

Try that with Intel these days!

I used to be a fan of their NICs, like many others, but they botched 2.5Gbit and for a long, long time didn't deliver anything low-power, economical or fully NBase-T capable above that, while the sheer number of device variants and revisions requires serious study. I consider 10G-capable NBase-T a commodity, pretty much like Gbit used to be.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
...and there's your more fundamental problem.

Well, this is a side effect of how 10G evolved. I talk in the primer about how 10M/100M/1G developed with about three years in between, and 10G followed pace more or less, BUT only for optical. 10G copper stalled, as a practical matter, for about ten YEARS, and even then didn't become practical and common until ... well truth be told, it may be a few more years from now.

expensive, hot and generally overbuilt to support things like virtualization, FCoE, iSCSI and had all kinds of driver issues

Right, the early adopters generally wanted/needed those, because for most purposes, 1G had reached the point of sufficiency for desktops and other endpoint applications. It was server-oriented usage where faster-than-gig was really helpful, and the feature sets on SFP+ cards had continued to evolve in the decade from 2003 to 2013, with stuff like advanced offload and virtual function support. It should come as little shock, then, that when 10G copper vendors tried to sell into that marketplace, expectations were for general feature parity. It had also become relatively cheap to do the silicon, but the additional complexity also made things such as drivers much more complicated.

When they came out eight years ago, the driver situation wasn't great

Understatement award.

Aquantia: it just works out of the box, nothing to worry about.

That hasn't been the case on the forums here, lots of people had significant issues. If the driver now works, that's sorta great news, except that Aquantia is still copper, which more or less requires copper on the switch side too. We just don't seem to be seeing a lot in the SOHO/hobbyist realm for that.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
copper on the switch side too. We just don't seem to be seeing a lot in the SOHO/hobbyist realm for that.
ServeTheHome came out with a video just the other day reviewing 2.5G and 10G copper switches--it may be this is finally starting to shift. But the other issue, of course, is that there's tons of solid surplus 10G SFP+ gear out there.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
FWIW, I tried out some of the stuff advertised on STH re: switches and came away seriously unimpressed.

The MokerLink 2G05210GSM SFP+ switch, which allegedly allows managed Ethernet interfacing with 2.5GbE copper as well as 1GbE, was frankly a hunk of junk. Buyers should be able to expect:

A) a working OEM web site
B) the correct manual to be included in the box with the switch
C) periodic updates to the firmware to address known issues

The Zyxel SFP+/2.5/1G competitor works better, likely by virtue of being unmanaged, coming from a company with a working website, and having fewer QC issues.

Contrast this with your average SFP+ switch like the CRS305 and the comparison is not even close. Yes, it's at a higher price point, but the stuff simply works: VLANs behave as expected, can be managed, etc. I’d rather pay more and get what I wanted than work around known issues.
 

abufrejoval

Dabbler
Joined
May 9, 2023
Messages
20
Second-hand surplus obviously changes the price dynamics quite a bit, but for me RJ45 / 10GBase-T was mostly about interoperability and the availability of cables: Cat 6+ or 7 was deployed by default at work, so upgrading some stuff to 10Gbit around 2008 was just a matter of swapping NICs and switches. Easy, but money and power consumption didn't matter much there.

In the home lab it obviously had to be a lot cheaper, but it was also about noise and power: no way I was going to tolerate any of those managed enterprise switches designed for computer rooms; it had to be passive or nearly as quiet. So I had to start with direct-connect cables between the primary machines for RAID backup.

Then the first NBase-T switches came along perhaps 8 years ago, at below €100/port and with fans that could be swapped for Noctuas: unnoticeable!

I replugged some cables, left most on 1Gbit, and then began a slow transition, box by box, as they grew and started to benefit from more bandwidth.

Having three performance/cost points via NBase-T really helped: 1-2.5Gbit was really easy with a Realtek USB3 dongle; 10Gbit required 4 free PCIe lanes (or TB), which wasn't always that easy; 5Gbit made very little sense on USB, though for PCIe x1 slots it might have been a good match...

In the DC it's mostly optical or direct connect these days; I don't care any more, I just order the connections and let colleagues do their job.

In the home lab I don't care either: I just use good-enough cables at the lengths I need and it's plug & forget. Bandwidth is only a NIC attribute; I've got plenty of other stuff to worry about.

With optical, there is an endless variety of cables, transceivers, GBICs, direct connect at various lengths, manufacturer lock-in etc. It's a whole science of its own. If you enjoy that, great!

I'd much prefer running PCIe or CXL over Thunderbolt switches instead of Ethernet: IP over TB using Intel NUCs was the easiest cabling ever, if only it didn't create random MACs on every topology event!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
With optical, there is an endless variety of cables, transceivers, GBICs, direct connect at various lengths
Sure, options are bad.

I mean, you're right that there are lots of options, but "SR optics and OM4 patch cables with LC connectors" gets you 10G out to 400 meters or more. It gives you at least 100 meters at up to 100 GbE. It's the easy button, and it's relatively future-proof. Need to go thousands of meters? Sure, you can do that too, but that naturally gets more specialized--but it's far easier to do with fiber than with Cat8 (it's still just two optics and one length of fiber to go up to 10 km at 100 GbE).

It's true that fiber is more expensive per endpoint. Using compatible optics from fs.com, it takes two (one for each end) at $20 each, plus the patch cable itself (figure $10-15 for the nice Uniboot stuff). Not exorbitant, but definitely more than Cat8. You can cut that in half (or better) with used optics from eBay and standard duplex patch cables. Direct-connect can cut quite a bit off even that; those are running about $14 at fs.com, but of course you can't go nearly as far on that.
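
To put rough per-link numbers on that (same ballpark figures as above, nothing authoritative):

# Ballpark cost per 10G fiber link, using the rough figures quoted above (USD).
options = {
    "new SR optics (2 x $20) + uniboot patch (~$12)": 2 * 20 + 12,
    "used eBay optics (2 x ~$10) + duplex patch (~$5)": 2 * 10 + 5,
    "DAC (short, in-rack runs only)": 14,
}
for name, cost in options.items():
    print(f"{name}: about ${cost}")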
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
With optical, there is an endless variety of cables, transceivers, GBICs, direct connect at various lengths, manufacturer lock-in etc. It's a whole science of its own. If you enjoy that, great!

I'd much prefer running PCIe or CXL over Thunderbolt switches instead of Ethernet: IP over TB using Intel NUCs was the easiest cabling ever, if only it didn't create random MACs on every topology event!
In most homes, I’d wager OM3 multimode @ 850nm is likely the only optical you’ll ever need. It’s cheap, runs cool, and has the added benefit of electrically isolating various bits of your topology. Vendor locks can be a thing, but 10Gtek figured that issue out long ago.

As for IP over Thunderbolt, I had a whale of a time with QNAP when they brought out their NAS/DAS combo unit and claimed it would just work. Not really, if you didn’t have a Thunderbolt port dedicated just to connecting the QNAP to the CPU, since the QNAP will usurp all devices on the Thunderbolt bus.

So, if your laptop/desktop has plenty of Thunderbolt buses to plug into, great. Not so great with a MacBook Air whose single Thunderbolt bus had the monitor, NIC, and dock commandeered by the QNAP.
 

acp

Explorer
Joined
Mar 25, 2013
Messages
71
ConnectX-2 cards work fine basically everywhere, in my experience. I would generally expect anything newer to be about as good.
That's been my experience with ConnectX-3. I've been using 3m DAC cables with MikroTik switches. The only issue I ever had was when direct-connecting two cards, and that may have been a driver issue. That was in the early days; since adding switches I never went back to try it again.
 

jcizzo

Explorer
Joined
Jan 20, 2023
Messages
79
This is all really good stuff! I don't have anything as sophisticated as you folks; I'm still learning the whole 'NAS' thing. I'm coming from pfSense for my firewalls and I find the FreeBSD OS to be pretty spectacular, especially considering its cost. But I digress.

My NAS will be simple and low-powered (i3-7100T) on a Supermicro motherboard, with 16-32G of ECC. In trying to keep with the low-power approach, I'm trying to figure out the NICs. The Intel stuff is very expensive, and unless I get the expensive cards (XL710 and up), they use a good bit of power, so... if I have to go that route, I will. However, folks seem to rave about the Chelsio stuff, and looking at their typical power usage, they seem to fit the bill and can be had off eBay for a reasonable cost. I see the Mellanox stuff too for even less, but one guy says they're great and the next says they're junk... and, like the Chelsios, they seem pretty reasonable with power consumption.

After all of this, I'll probably wind up with a Chelsio T580... people seem to rave about them.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ServeTheHome came out with a video just the other day reviewing 2.5G and 10G copper switches--it may be this is finally starting to shift. But the other issue, of course, is that there's tons of solid surplus 10G SFP+ gear out there.

Yeah, so, we've basically lost ANOTHER decade. I was hoping when I wrote the 10G Primer that we were finally on the cusp of a crowd of 10G copper switches, but really all that staggered out the back door of the networking bar, drunk and stupid, was the hot, loud Netgear thing, which stumbled around looking for a place to vomit before collapsing in a pile.

 

abufrejoval

Dabbler
Joined
May 9, 2023
Messages
20
Sure, options are bad.

I mean, you're right that there are lots of options, but "SR optics and OM4 patch cables with LC connectors" gets you 10G out to 400 meters or more. It gives you at least 100 meters at up to 100 GbE. It's the easy button, and it's relatively future-proof. Need to go thousands of meters? Sure, you can do that too, but that naturally gets more specialized--but it's far easier to do with fiber than with Cat8 (it's still just two optics and one length of fiber to go up to 10 km at 100 GbE).

It's true that fiber is more expensive per endpoint. Using compatible optics from fs.com, it takes two (one for each end) at $20 each, plus the patch cable itself (figure $10-15 for the nice Uniboot stuff). Not exorbitant, but definitely more than Cat8. You can cut that in half (or better) with used optics from eBay and standard duplex patch cables. Direct-connect can cut quite a bit off even that; those are running about $14 at fs.com, but of course you can't go nearly as far on that.
I guess homes come in different sizes.

In mine it's very hard to exceed, in any direction, the 50 meters that Cat 7 cable is supposed to do: I guess I should have tried harder...

In fact most of the 20-some computers in my home lab sit under my desk (the NUCs snuggle really tight), while the switches hide behind a wall of screens on top of it. 3m cables are almost an excess; 1m cables connect the switches. My main pride is that all of it is next to inaudible, and the birds outside are much louder.

The kids' gamer PCs are scattered around the house and still on Gbit, but on Cat 7 wires. Since SSDs became so dirt cheap, their main bottleneck is the Gbit Internet uplink (hey, that's fiber!): not enough east-west traffic to warrant an upgrade, but at current NBase-T equipment prices I'm considering upgrades just for fun.

BTW: the last 10 meters of the Internet are actually coax, because that was already there for the TV antenna. It runs at 2.5Gbit, so there is even some headroom for the fiber...
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
members who have run fiber out to a garage
That would include me. And while I don't think the length was over 50m, it wasn't much less, either.
 