
NIC options for 2.5GbE?


Valantar

Junior Member
Joined
Apr 11, 2021
Messages
23
I've recently built my first TrueNAS setup and I'm starting to get it configured the way I want it; the only thing really missing is a better NIC - both because the integrated NIC on my motherboard is Realtek, and because I want to wire up my apartment for 2.5GbE soon. However, 2.5GbE on TrueNAS seems like a challenge.
- From what I've found, there are no FreeBSD drivers for the Intel I225, nor are any planned
- Realtek doesn't do FreeBSD drivers, so their 2.5GbE controllers are out
- The ubiquitous Intel X540 doesn't support anything in between 1GbE and 10GbE

So the options seem to be either the Intel X550, which is rather expensive, or something Aquantia/Marvell AQtion-based, which is a bit cheaper but also not that easy to find since Marvell bought Aquantia (Qnap seems to offer a few decent alternatives, though). Any recommendations here? I can't seem to find any clear-cut recommendations on the forums.


(And please no suggestions that I instead look for used enterprise 10G SFP+ gear - it's both outside of my budget and not feasible for my setup due to the inability to terminate my own wires. 2.5GbE fits my needs perfectly and should last me for the next 10 years easily.)
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
4,728
Didn't we already see this post?

Yup, it's part of this one:

If you're going to ask the same question again, at least take into account the time that @jgreco spent answering you and adjust the question accordingly (particular reference to your last line).
 

Valantar

Junior Member
Joined
Apr 11, 2021
Messages
23
Didn't we already see this post?

Yup, it's part of this one:

If you're going to ask the same question again, at least take into account the time that @jgreco spent answering you and adjust the question accordingly (particular reference to your last line).
Sorry if I came off as if I was reiterating an already asked question, but frankly I don't quite think I am. Yes, I asked about networking in that post (and indeed stated that I wanted a 2.5GbE setup) and yes, I was recommended 10G SFP+ parts. After clearing up why this isn't an option to me, that part of the discussion stopped, and there wasn't any concrete advice to be found beyond reading further. Which is precisely what I've been doing since. But 2.5GbE discussions seem pretty sparse and inconclusive, hence posting this as a separate thread. (For the record, I was also strongly recommended to post separate questions as separate threads in that thread, and at this point I find it rather unlikely that me bringing up a desire for further advice in that thread is likely to reach anyone new.)

The current hardware guides don't really mention anything in between 10GbE and 1GbE, which is understandable given that 2.5 and 5 are much rarer and newer. But I'm still struggling to make sense of whether there's any reason to spend 2-3x the money on an Intel NIC (an Intel X550 within Europe is ~2000 SEK, while a 5GbE Qnap Aquantia NIC can be had for ~700).

Is Aquantia/Marvell support that bad? I've seen various reports, some with some setup trouble, others with getting it working smoothly and easily, but it's also very difficult to judge how relevant these are for someone new to TrueNAS. As someone mentioned in another thread, if it's marked as "in development" but works, what's the difference? And is there reason for someone with non-enterprise demands to avoid it?
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,957
The problem here is that sometimes the thing you WANT isn't the thing that IS.

Aquantia/Marvell support is pretty bad. This is the case for various reasons, but when it comes right down to it, it is because the cards with the best support usually have driver teams at the chipset vendor who are not just trying to rush a crap-ass chip out the door.

The driver teams at Intel, Chelsio, Solarflare, etc., are all focused on creating high performance drivers in order to be able to sell their products to the data center market. If Chelsio (just for example) makes a great chipset but the driver can't cut it, the data center and HPC guys simply buy the Solarflare which has great chipset AND a great driver. There is an incentive for these companies to write great drivers.

Companies like Realtek and Marvell generally are producing commodity chipsets for the consumer market, and so they do not have massive budgets to pay expensive and talented device driver authors, especially for non-Windows operating systems. The drivers for these devices are often reverse-engineered by third parties, and I certainly have nothing against the genius talents of people like Bill Paul to design drivers from datasheets, or, worse, reverse-engineering, but you usually won't get a world-class driver out of that.

So, now, stop for a moment and consider what's going on here with 2.5G and 5G ethernet.

In the early 1990's, we had 10Mbps ethernet. In 1996, 100Mbps ethernet. In 1999, gigabit ethernet. In 2002, 10 gigE. About every three or four years, an order of magnitude increase in network speeds was introduced. In all cases but the last, within about 5 years of introduction, reasonably priced commodity gear became available for that technology. We stalled out with 10G because the technology became more difficult. Copper based 10G wasn't practical at first. Further, and perhaps unexpectedly, it seemed that gigabit was actually finally sufficient for many or even most needs.

It took until around 2013-2015 for 10G copper ethernet to truly become a thing, though, which is years longer than any previous generational jump. This is really because we're reaching limits to this particular type of copper technology. This may well be the end of the evolution of copper RJ45 ethernet technology. Uptake for 10G copper was extremely slow, because almost anyone who needed it had already gone to 10G fiber, which is cheap and easy.

A lot of noise has been made about 2.5 and 5G, but the only thing that these really have going for them is that they can support PoE. This is basically a massive swindle by the industry to try to sell everybody on technology that is not a step forward.

So here's the thing. You have bought into a technology being promoted by grifters. They're relying on the fact that the experience people have had with 1G is subpar, which is ironically often because their own 1G technology is subpar. They are now selling you subpar 2.5G or 5G.

None of the inexpensive 2.5G or 5G chipset vendors are putting driver development effort into anything but Windows. The reason is that there is no return on such an investment. They want to be able to sell new ethernet cards to people who already have 1G ports. They don't give a crap about anything else. They want to sell you a card and make a profit doing so.

So this really comes down to "you get what you pay for."

yes, I was recommended 10G SFP+ parts. After clearing up why this isn't an option to me, that part of the discussion stopped, and there wasn't any concrete advice to be found beyond reading further. Which is precisely what I've been doing since. But 2.5GbE discussions seem pretty sparse and inconclusive, hence posting this as a separate thread.
So you were pushed in the direction of the high performance, high quality, available-inexpensively-on-the-used-market 10G stuff. You didn't want to hear it. That's fine. Go buy whatever you want and do whatever you want, and I promise you it will work well unless it doesn't.

The people here generally focus on what works well. The lack of responses to your inquiry should indicate the level of success people have had. It doesn't mean that you can't make it work, but it may not work well.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,368
To me, the main appeal of NBase-T technology (2.5, 5.0, and possibly 10GbE) is the possibility of running higher speeds on older copper plants without having to update said wiring. Especially if it's behind closed walls, covered by expensive-to-repair surfaces, and so on. Never mind the disruption the new wire installations + cleanup will cause. So, on the surface it makes some sense, especially now that the associated switches are becoming more affordable and the NICs are frequently included in Docks (see OWC Thunderbolt 3 Pro dock, as an example) or even built into computers.

The devil is in the details, however, as @jgreco points out. Driver quality is a thing, and the 10GbE driver is no longer available for Mac OS users as a standalone package the way it was in the past. Nope, that driver is only included in OSX 10.13.6 and up, something that isn't mentioned unless you drop into the technical specifications section. I'm not sure whether this Aquantia chipset works well on OSX or not; I returned the dock because at the time I was using OSX Sierra (10.12.x) and didn't want to deal with the then-new Apple File System in High Sierra until most of the bugs had been ironed out.

So here is my take: If you are installing a new plant in your home, I would suggest going with structured cable, which gives you the option of including optical as future-proofing. Even if it's Multi-mode OM1 (the oldest there is), the stuff will be just fine for the next 20 years. Step up to OM3, and you'll likely be fine for the next 100 years. The nice thing about structured cabling is that the outer jacket protects the contents really well. The only thing to watch out for is a bad fiber termination, i.e. your electrician treating fiber like copper.

I bit the bullet and re-terminated every fiber drop in the house to repair the damage at $30 an outlet. The used professional tools cost me ~$1000 on eBay/Amazon and I resold them quickly for a $200 loss after I verified all the fibers. It's really not that hard as long as you leave yourself adequate slack on both ends, can follow simple instructions, make a couple of practice connections first, and make sure the fibers are well-installed.

Plus, high-speed optical transceivers are a lot cheaper, produce less heat, and face no installation limits in inexpensive SFP+ switches like copper transceivers do. Lastly, optical is also inherently immune to lightning strikes and other nearby induced voltage potential nastiness. I use optical connections for some of my stuff for that reason.
 

rvassar

Neophyte Sage
Joined
May 2, 2018
Messages
685
FWIW - I have one of the little $140 4+1 port Mikrotik 10GbE switches. I run a couple 10GbE clients using older Mellanox SFP+ cards that I picked up for all of $16 each on eBay. The TrueNAS box and one client connect to the switch using cheap old-school twinax SFP+ copper. I have a workstation that's a few meters too far away, and it uses OM3 fiber. The remaining two ports, the native 1GbE copper goes to the house network, and the remaining SFP+ plug has a copper transceiver for my local 1GbE clients hosted on another switch.

Caveats I've discovered:
1. The 1GbE copper SFP transceivers do not auto-negotiate with the Mikrotik SFP+ sockets. You have to manually set the speed.
2. The native 1GbE does not support PoE input as a redundant power source.
3. The older Mellanox ConnectX-2 EN NICs are PCIe 2.0. How that plays out depends on what & where you install them.
4. Don't drop the fiber ends with the caps off. The tips are quite fragile. o_O

Total investment on my part... Maybe $250. $300 if I count the spares.

The Mikrotik SFP+ slots with a 10GbE copper transceiver might negotiate 2.5 or 5GbE for your extended runs; I have not tried it. The 10GbE copper transceivers are kind of touchy, power hungry, have compatibility problems, and are slightly more expensive... But you might get it to work. The nice thing about having the NAS at 10GbE is that each lower-speed client will get its full 2.5 or 5GbE wire speed. That's why I dropped the 1GbE copper SFP in mine. All the home-lab traffic in my office gets either 10GbE or 1GbE, separate from the rest of the house.
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
4,728
Another point may be that you could consider a switch that supports one or a couple of 10G ports (and use those for TrueNAS) and has a bunch of others supporting multi-G for your clients and longer/older cable runs.
 

rvassar

Neophyte Sage
Joined
May 2, 2018
Messages
685
Following up... There's a review of 10GbE SFP+ to RJ45 copper transceivers over at STH. Several of them are confirmed to support 2.5 & 5GbE speeds. They apparently work like a small two port switch, with the SFP+ side at 10GbE, and the RJ45 able to negotiate down.

The Mikrotik switches should (maybe?) have a more favorable tax status for you in Europe. So you only need to find a supported SFP+ 10GbE card for the TrueNAS machine, a SFP+ twinax patch cord, and source the SFP+ to RJ45 modules. Then you can use whatever 2.5 or 5GbE copper NIC you want in your clients, and do your own wire runs & termination.
 

Valantar

Junior Member
Joined
Apr 11, 2021
Messages
23
First off, thanks for all the replies! A lot of useful stuff here, and hopefully I can address it all - it took me quite a while to get through.

First off, @jgreco, while I appreciate you getting into things, I think there's a disconnect between my questions and how you're responding. I'm essentially coming in here saying "hey, I'm trying to figure out this stuff, I've gotten part of the way but now I'm stumped, could you help me out?", while my impression of the tone of your responses is as if I was coming in here demanding stuff or making claims of some kind. I'm not. I understand the frustration of advising newbies on stuff that is child's play to you, but at least in this case I hope you can take a step back and see the difference. I'm literally asking for advice here. I'm also trying to explain to you how your advice might not be suitable to my use case, regardless of its technical superiority in an absolute sense. There's a few specific things I'd like to address, but I'll stick this in a spoiler tag to avoid too much of a wall of text.
It's pretty obvious that consumer-facing companies provide drivers only for consumer platforms, and that enterprise ones provide better enterprise-platform drivers. I never said I didn't understand why this is; if I was asking about drivers, I was asking whether they exist and/or work. Figuring out the latter - which is pretty specific information - is quite a lot more challenging than having a general understanding of how markets or driver development works, after all. And looking at this from the outside, there's a rather explicit level of condescension in your response.

As for cabling and signalling standards, we're running into physical bottlenecks across the board with copper - from PCIe to DisplayPort to Thunderbolt to HDMI, we're rapidly reaching the point where we need fiber or active cabling for any significant length - so it's not exactly surprising that 10GbE is a lot more challenging than GbE, or that fiber is better when you want really high bandwidths, especially across longer distances. Again: I don't have an issue understanding that there are inherent limitations and challenges in play here. But I frankly don't see them as very relevant to me. Which I've also been trying to explain to you in the other thread.
A lot of noise has been made about 2.5 and 5G, but the only thing that these really have going for them is that they can support PoE. This is basically a massive swindle by the industry to try to sell everybody on technology that is not a step forward.

So here's the thing. You have bought into a technology being promoted by grifters. They're relying on the fact that the experience people have had with 1G is subpar, which is ironically often because their own 1G technology is subpar. They are now selling you subpar 2.5G or 5G.
Here, as I pointed out in the other thread, we're looking at a pretty major difference in perspective. I'm taking a guess here and saying that you probably work in IT, maybe even with server infrastructure. Whatever the case may be, it's pretty clear that your requirements and mine are worlds apart. I certainly wouldn't describe my experience with 1GbE as sub-par - it's a bit slow, but it's dead stable, plug-and-play, dirt cheap and ubiquitous, lets me make purpose-built cabling with a $10 tool and is easily user serviceable in all relevant ways. Outside of large transfers (which are rare) and photo editing over the network (which is more frequent, but maybe a monthly activity), GbE would be perfect for my needs. I just want a bit more.

The reason for 1GbE stagnating as the standard for home users is also rather obvious: nothing else has been even remotely necessary outside of edge cases. Networking for 99.999% of home users means internet access, and internet connections are overwhelmingly <100mbps. It's only in recent years that HDDs have significantly exceeded GbE speeds, and SSDs for networked storage are still barely a thing. Fiber internet is bringing faster speeds to more people, but most fiber connections are still in the 100-200mbps range. So why change something that works, does everything the vast majority needs it for, and costs next to nothing? It's been perfectly fine.

Of course, I'm not really happy with GbE. But that is only down to transfer speeds. It is literally the only thing I want improved. I'm well aware of this being a luxury desire - but given that nGbE is (finally!) proliferating, I was hoping it would be a rather small luxury. So far it's looking a bit worse than I was hoping - but still within the realm of possibility.

Now, I understand that 10GbE is inefficient and hot - but I neither want nor need 10GbE. I would literally never make use of the bandwidth. I'm only talking about 10GbE hardware as it seems to be the only way to get anything above GbE on TrueNAS. Of course it might be that a 10GbE NIC in 2.5GbE mode consumes the same amount of power, but a few watts more in one computer hardly matters. Current gen Intel and Realtek 2.5GbE chipsets reportedly barely consume more power than their GbE chipsets. Of course this doesn't help me given that these aren't supported in TrueNAS, but it illustrates how the increased power draw of a (single) 10GbE NIC is something I'm willing to accept if needed.

After all, I'm not going to be running massive switches with tons of connections, so a few watts more per 10G port doesn't matter to me. I'm also working within the confines of an ~800ft² apartment, so whether or not a reliable signal can be maintained above, say, 50m with Cat6 cabling is also entirely irrelevant to my use case.

You're talking as if nGbE is useless and the only natural step would be for the consumer world to move to 10GbE fiber-based networking as a next step past GbE. I think you're missing a lot of perspective on just how massive that step is for consumer applications. Backwards compatibility goes out the window. Installation prices shoot up - 2.5GbE chipsets are $3-4, after all. There's a reason why they're found on every $150 motherboard these days. You're not getting 10GbE in that price range, no matter what - that's more of a $400 motherboard feature. Then there's cabling (brittle/fragile, weird new connectors, no end-user termination, large bend radii making for cumbersome installation), not to mention replacing cabling for those who already have Ethernet in their house. And so on and so forth. All the while, the benefits in the end-user space for 10GbE over 2.5 or 5GbE are entirely negligible. Servers and datacenters take whatever bandwidth you can give them and make use of it - that's not the case whatsoever for home users. There's a noticeable difference to me if my NAS can deliver ~220MB/s rather than ~90, but the change from 220 to 900 might not be noticeable at all - my storage isn't that fast, nor is there that much load on the network! And I'm not likely to move to an SSD-based pool any time soon.

For most end users, GbE is still perfectly fine - including crappy Realtek hardware on supported OSes - but some of us are starting to see those transfer speeds as a bottleneck. nGbE, whether 2.5 or 5, is a perfectly logical stepping stone towards improving that - it's well matched with accessible storage today - and is specced to realistically cover our wants and needs for the next decade if not longer. Your attitude is undoubtedly based on a lot of experience, but from what you're saying here, it also fundamentally fails to take into account the actual real-world circumstances of people like me.

I completely understand that we're coming from very different worlds. That much is clear. It seems that a similar understanding is completely absent in your responses, however. You can reiterate how fiber is technically superior till you're blue in the face - it still won't make it suitable to my use case. It's too large of an investment (either money, time or both), too complicated (I'm already working on one doctorate, I really don't want a second one just to set up a >GbE home network!), requires too much bespoke hardware, and is really not suited to the hardware I need it for. I made this thread based on looking further into this after our previous discussion, feeling like I'd come as far as I could based on what I'd read. The thread asks a very specific question. Your response is equivalent to me asking "I've decided to buy a small hatchback, should I get a Toyota or Honda?" and you responding with "hatchbacks are crap and a scam, get a van or SUV instead".

I was really hoping to avoid having to reiterate this. Which is why I wrote what I did in the initial post here. 10G SFP+ simply doesn't fit my needs. It's completely excessive. Going Ethernet might have a higher baseline cost, but that discounts both the work I'd need to do to learn enough to find good deals in the SFP+ world, and the cost difference of buying off-the-shelf parts vs. importing off Ebay here in Sweden (+25% VAT + ~$10-15 processing fees for anything entering from outside of the EU). With Ethernet, I already have the cabling and tools required, know how to deal with every link in the chain, and can buy parts locally. With SFP+ I'd avoid that expensive NIC, but I'd either need to buy (kind of expensive) SFP+-to-Ethernet converters for hooking up the non-SFP+ devices (which would be everything except the NAS), get SFP+ NICs for everything (not an option for my main desktop or the HTPC, both of which are ITX) or buy a $600+ switch that supports both, and either way I'd need to source potentially expensive cabling or import used cabling off Ebay. I'd also need to learn what, to me, looks like a complete mess of compatibility regarding switches, transceivers, cabling standards, and so on. It would undoubtedly net me a technically superior setup - the benefits of which I won't ever actually see. Instead, I would spend a lot more time and effort (and potentially, though not necessarily, money) on this, rather than just putting together something that fulfills the needs that I have and works with what I've already got.

Hence why I started this thread asking a very specific question: given that only the Intel X550 and various Aquantia NICs support nGbE and have some semblance of FreeBSD support, is there a reason to splurge on the Intel NIC at 2-3x the price of an Aquantia? A 'hey-listen-here-now' half-rant about why SFP+ is superior to Ethernet doesn't help me answer that question - though I guess indirectly you did confirm that Intel drivers are simply more stable (or established/accepted), and that going for Aquantia would be more of a gamble. But that could have been answered in two sentences, without the condescending lecture assuming I have no idea what I'm talking about.
Tl; dr: I'd appreciate if you stopped trying to shove a technically superior but unsuited for my use case solution down my throat. And please drop the condescending tone. My situation and yours are not the same, and you're entirely discounting the context for my questions, which makes your response both condescending and ill-suited to the task.
I bit the bullet and re-terminated every fiber drop in the house to repair the damage at $30 an outlet. The used professional tools cost me ~$1000 on eBay/Amazon and I resold them quickly for a $200 loss after I verified all the fibers. It's really not that hard as long as you leave yourself adequate slack on both ends, can follow simple instructions, make a couple of practice connections first, and make sure the fibers are well-installed.
I just had to quote this part as it's another apt illustration of the "different worlds" issue in this discussion. In my reality, the idea of having $1000 laying around to (even temporarily) spend on tools is ... absurd. Good for you, I guess, but that is not the reality I live in, even if I were to get most of it back at some vague point in the future. Heck, that $200 loss you took re-selling the tools would constitute a major part of my overall budget here. If an additional $200 wasn't an issue, I'd just buy an X550 NIC and never ask this question in the first place.

I don't really have an issue with the rest of what you're saying, but it's overkill for my needs. 2.5GbE is pretty much perfect for me, and the only remaining question is how to get it into my TrueNAS build.
Another point may be that you could consider a switch that supports one or a couple of 10G ports (and use those for TrueNAS) and has a bunch of others supporting multi-G for your clients and longer/older cable runs.
That would be a great option, but sadly the only switches I've found supporting that are $600+. IIRC there are some $300-ish switches with a couple 10G SFP+ ports and a bunch of 1G Ethernet, but that's completely useless to me. The appearance of consumer-oriented 2.5GbE switches at reasonable prices (~$130-180 depending on 5 or 8 ports) is why I'm trying to put this together, as it makes >GbE networking economically and practically feasible. And given how new nGbE is, the used market is of no help either.
Following up... There's a review of 10GbE SFP+ to RJ45 copper transceivers over at STH. Several of them are confirmed to support 2.5 & 5GbE speeds. They apparently work like a small two port switch, with the SFP+ side at 10GbE, and the RJ45 able to negotiate down.

The Mikrotik switches should (maybe?) have a more favorable tax status for you in Europe. So you only need to find a supported SFP+ 10GbE card for the TrueNAS machine, a SFP+ twinax patch cord, and source the SFP+ to RJ45 modules. Then you can use whatever 2.5 or 5GbE copper NIC you want in your clients, and do your own wire runs & termination.
That's not the worst solution, though it quickly gets expensive - if these are the transceivers you're talking about, that's ~$55 apiece. If I went that route and found a $150 Mikrotik switch and a cheap NIC + cable for the NAS, it wouldn't take many transceivers to tip the scales in favor of just splurging on a more expensive copper NIC for the NAS and going with a native 2.5GbE switch. Which also has the advantage of me not needing to keep my 1G switch around (4 ports isn't enough).

It probably just serves to demonstrate how little I know about SFP+ and enterprise networking gear, but why can't these transceivers be plugged into an SFP+ NIC? If that were possible, that would be an ideal solution - a cheap SFP+ NIC with a $55 transceiver plus a native copper switch, and I'd be done. But the review makes me think these can only be connected to a switch?


Still, I think I have my "answer": I can gamble on saving a buck and buying a not-really-supported Aquantia NIC that may or may not work, or I can pay for the simplicity and solid support of an Intel NIC. The price difference is significant enough that I'll have to read up some more on experiences using Aquantia NICs (though if I get one and it doesn't work out I could always return it - one of the advantages of buying locally), but I might end up concluding that Intel is the way to go. Either way, thanks for your input! I now have a much clearer overview of where I need to continue my research.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,368
Well, that type of response is one way to destroy goodwill.

FWIW, I could have resold the tools at cost; I just wanted them out of the house quickly. If fiber is not the right move for you, no worries. However, at present, for high-speed ethernet it is likely less expensive to run fiber than copper if you can use pre-terminated cable assemblies. If you have to run raw stuff through the walls, termination adds a lot of headache for fiber, less so for copper. But copper is also not very performant in comparison - RJ45 SFP+ transceivers run hotter, have less range, and cost more than optical alternatives.

If you can go SFP+, then twinax DACs are also an inexpensive option for short connections. Switch to NAS, for example.

Good luck with your project.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,957
First off, @jgreco, while I appreciate you getting into things, I think there's a disconnect between my questions and how you're responding. I'm essentially coming in here saying "hey, I'm trying to figure out this stuff, I've gotten part of the way but now I'm stumped, could you help me out?", while my impression of the tone of your responses is as if I was coming in here demanding stuff or making claims of some kind. I'm not. I understand the frustration of advising newbies on stuff that is child's play to you, but at least in this case I hope you can take a step back and see the difference. I'm literally asking for advice here. I'm also trying to explain to you how your advice might not be suitable to my use case, regardless of its technical superiority in an absolute sense. There's a few specific things I'd like to address, but I'll stick this in a spoiler tag to avoid too much of a wall of text.
[...]
Tl; dr: I'd appreciate if you stopped trying to shove a technically superior but unsuited for my use case solution down my throat. And please drop the condescending tone. My situation and yours are not the same, and you're entirely discounting the context for my questions, which makes your response both condescending and ill-suited to the task.
See, I do this stuff professionally, but I'm also the guy who's spent countless hours helping newbies get up to speed on FreeNAS, including having written a lot of the resources about networking, 10 gig, etc. I do this because I want people to find success in an area that is admittedly complicated.

Because I have worked with this stuff professionally for many years, and have been working with FreeBSD just as long, I see the totality of the issues involved. So I spent some significant time composing a bespoke response to you that explained what was going on and why 2.5G isn't well-supported, and isn't likely to be well-supported until driver updates arrive, probably in the form of a proper driver for the Intel I225, which may not be until FreeBSD 13 rolls around.

I am trying to "shove a technically superior but unsuited for [your] use case solution down [your] throat" because it is currently the path to success for what you are trying to do. If you do not like that answer, fine, but there is not really another answer floating around. Do not blame that on me, and I suggest not being rude by calling it a condescending tone just because you're not hearing the thing you want to hear. When we don't have a good answer for people, we often try to put as much raw information out there as possible so that you can have a better understanding of the situation and maybe salvage something out of it, but if you are going to waste time trying to interpret that in a way that you can be offended by, you have wasted both your time and mine, and I do not appreciate that.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,368
Not sure that 2.5Gb Ethernet even makes sense. OP wants an inexpensive solution to interface with a single mirrored pool consisting of two drives and another standalone pool consisting of a single HDD. The IOPS / transfer rates are unlikely to reach the limits of 1GbE networking on a sustained basis. Drive limits are likely around 130 MB/s and 1GbE network speeds max out around 120 MB/s.

If on a budget, why bother with 2.5GbE networking if the NAS doesn't feature an SSD pool or a multi-VDEV HDD pool? The added bandwidth is unlikely ever to be noticed. Spend the money on something more productive, like a mirror drive for the media pool.
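For anyone weighing this, the arithmetic above can be sketched in a few lines of Python. The ~94% protocol-efficiency factor is an assumed ballpark for illustration (Ethernet/IP/TCP overhead), not a measured value:

```python
# Back-of-the-envelope: nominal line rate vs. usable payload throughput.
# The 0.94 efficiency factor is an assumed ballpark, not a measurement.

def usable_mb_per_s(line_rate_gbps, efficiency=0.94):
    """Approximate payload throughput in MB/s for a given line rate in Gbps."""
    return line_rate_gbps * 1000 / 8 * efficiency

HDD_LIMIT = 130  # MB/s, typical sustained rate for a single consumer HDD

for rate in (1.0, 2.5):
    net = usable_mb_per_s(rate)
    bottleneck = "network" if net < HDD_LIMIT else "drive"
    print(f"{rate}GbE ~ {net:.0f} MB/s usable -> bottleneck: {bottleneck}")
```

At 1GbE the network (~118 MB/s) sits just below a single HDD's sustained rate, so the link is the bottleneck; at 2.5GbE the drive becomes the bottleneck instead, which is the crux of the argument either way.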
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,957
Following up... There's a review of 10GbE SFP+ to RJ45 copper transceivers over at STH. Several of them are confirmed to support 2.5 and 5GbE speeds. They apparently work like a small two-port switch, with the SFP+ side at 10GbE and the RJ45 side able to negotiate down.
If that actually works, that'd be pretty cool. We'll have gone from "copper SFP+ is a sucky boondoggle" to "hey here's something useful" in just a handful of years. If anybody tries this and it works, let me know. I'm not burning with curiosity to the level where I'm going to go and fund a bunch of 2.5/5G gear to build a test lab, but I'm certainly fine with adding some notes to the 10 Gig Networking Primer on this...

Even if that doesn't pan out, getting one of the four-port Mikrotik SFP+ switches, which support copper SFP+ modules that do 2.5 and 5GbE speeds, is a plausible fallback position as well.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,368
Keep in mind that even the official Mikrotik Copper RJ45 S-RJ10 SFP+ module still has thermal issues, unlike their optical transceivers.

Let's start with the thermal guidance, where Mikrotik recommends limiting the number of S-RJ10 modules in a device, even actively-cooled ones, due to heat and power-supply issues. Mikrotik prefers open SFP+ cages between S-RJ10 modules but will tolerate adjacent optical units. Ideally, spread S-RJ10 modules across all cages for power and heat reasons, avoiding vertically- or horizontally-adjacent S-RJ10 modules.

The popular CRS305-1G-4S+IN 4-port SFP+ switch can handle a maximum of two of these modules. The rest of their switches/gateways/routers usually have limitations as well; see here.

I take issue with Mikrotik's documentation re: S-RJ10 limits, which should be associated with each product rather than spread out all over. Too easy to make a mistake.
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
4,728

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,368
Follow-up: as of 12.0-U4, TrueNAS can support the RTL8125 chipset as long as you set two rc.conf tunables. Not sure why iXsystems announced RTL8125 support yet still requires the two tunables to be set, but such is life. See here.
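For reference, a sketch of what those tunables look like, using the names commonly reported in community threads for 12.0-U4 - verify the names and the tunable type against your own release before relying on them:

```shell
# Commonly reported as Type "loader" under System -> Tunables in the
# TrueNAS CORE web UI (names per community posts; verify for your release).
if_re_load="YES"
if_re_name="/boot/modules/if_re.ko"
```

The idea is simply to force-load the vendor Realtek driver module at boot instead of the in-kernel re(4) driver; a reboot is needed after setting them.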
 

Valantar

Junior Member
Joined
Apr 11, 2021
Messages
23
Thanks for the tip! Still more than 2x the price of consumer-oriented 2.5G switches, but it definitely has an attractive feature set. The price would be a very significant stretch for me though.

Qnap also just announced two new switches, at least one of which seems very well suited for my setup: https://www.techpowerup.com/282806/...ch-qsw-2104-series-featuring-10gbe-and-2-5gbe

They both have 4x 2.5G + 2x 10G ports; one SKU does 10G over copper while the other uses SFP+. And they are reasonably priced too! The SFP+ version is 1800 SEK while the 10GbE version is 2000 SEK, so a bit more than pure 2.5G switches, but still within range.
Well, that type of response is one way to destroy goodwill.

FWIW, I could have resold the tools at cost, I just wanted them out of the house quickly. If fiber is not the right move for you, no worries. However, at present, for high-speed Ethernet it is likely less expensive to run fiber than copper if you can use pre-terminated cable assemblies. If you have to pull raw cable through the walls, termination adds a lot of headache for fiber, less so for copper. But copper is also not very performant in comparison - RJ45 SFP+ transceivers run hotter, have less range, and cost more than optical alternatives.

If you can go SFP+, then twinax DACs are also an inexpensive option for short connections. Switch to NAS, for example.

Good luck with your project.
I understand that a response like that can come off that way, but I also hope you understand the frustration of being on the other end. I first came here with a broad and vague set of questions - which I obviously understand is asking a lot, and can be frustrating to deal with - from which I learned that what I hoped for wasn't quite possible - a response that I'm definitely very thankful for and appreciative of! Then I did more research, arrived at what I found to be the best current compromise for my use, and asked for some very specific advice about it, yet was met with "haven't you asked this before?" and a repeat lecture on the superiority of a technology that I have explained at length isn't really suitable or desirable for my use. Understanding needs to be a two-way street, after all, and advice that doesn't adapt to the conditions in which it will be applied will inevitably be a poor fit. That doesn't mean it (or the effort involved) isn't appreciated, but rather that this well-meant advice misses the mark. I've tried to explain the why and how of this at length in two threads now, hence my frustration.

Your comment on copper not being very performant illustrates this: I've reiterated several times that I really, really don't have any need whatsoever for 10G speeds. Period. So whether 10G over SFP+ fiber is superior to 10G over copper Ethernet cabling... doesn't matter. At all. I completely understand that it is superior; I just will never, ever see any trace of that superiority in practice. Which renders it meaningless. I just want something faster than GbE, and 2.5GbE is likely to be plenty for my use for years to come. I also don't need long cable runs, and 2.5G can easily cover the lengths I need over Cat6. Hence I don't care about 10G copper performing worse (I'm not going to run 10G copper), nor about it consuming more power (if I need a single specific 10G copper NIC for its 2.5G compatibility - which at this point it doesn't look like I need or want - those few extra watts don't matter). Considering the new Qnap switches above, I'm strongly considering getting a cheap SFP+ NIC for the NAS and some cheap eBay cabling to hook it up - the NAS and the switch will sit close to each other and relatively hidden, so that's a decent solution even if it will be a bit messy.
See, I do this stuff professionally, but I'm also the guy who's spent countless hours helping newbies get up to speed on FreeNAS, including having written a lot of the resources about networking, 10 gig, etc. I do this because I want people to find success in an area that is admittedly complicated.

Because I have worked with this stuff professionally for many years, and have been working with FreeBSD for many years, I see the totality of the issues involved. So I spent some significant time composing a bespoke response to you that explained what was going on and why 2.5G isn't well-supported, and isn't likely to be well-supported until driver updates arrive, probably in the form of the igc driver for the i225-V, which may not be until FreeBSD 13 rolls around.

I am trying to "shove a technically superior but unsuited for [your] use case solution down [your] throat" because it is currently the path to success for what you are trying to do. If you do not like that answer, fine, but there is not really another answer floating around. Do not blame that on me, and also I suggest not being rude by calling it a condescending tone because you're not hearing the thing you want to hear. When we don't have a good answer for people, often we try to put as much raw information out there as possible so that you can have a better understanding of the situation, and maybe salvage something out of the situation, but if you are going to waste time trying to interpret that in a way that you can be offended by it, you have wasted both your time and mine, and I do not appreciate that.
I completely understand where you're coming from, and I really appreciate your efforts, please don't misunderstand that. I also completely understand that there are severe issues (well, that's an understatement) with 2.5G NIC support on TrueNAS - that's the point of this thread after all, asking specific questions about two very specific workaround solutions through using (possibly) supported 10G NICs that also support 2.5G, thus bypassing the issue of non-supported 2.5GbE NICs (which, in light of the hardware availability changes seen since, might not be necessary at all, of course). I'm not calling your response condescending because I don't like the answer, but because your answers insist on highlighting considerations that I've repeatedly said aren't relevant to my use case, such as 10G performance. This is where the "this is technically superior", "yes, but I don't need that and will never notice it" tango we've been doing stems from, after all. I'm calling it condescending because you're failing/refusing to see that my specific needs and your expertise don't align, and instead of adapting the advice given you're insisting on maintaining irrelevant considerations.

I completely understand that my desires are quite radically different from the considerations of someone wanting as much performance as possible, for example. I'm well aware that my desires here are niche and weird, probably especially so in this community, and I'm willing to take the consequences of that, including using odd and technically sub-par hardware combinations if necessary. I've also adapted as I've learnt, changed my ideas of what I ought to do, and rejected my initial thinking. What I'm not willing to do is spend the - quite significant - time and effort required to learn enough to set up an SFP+ networking solution for my apartment when, as I've explained repeatedly, I really, really, really don't need the performance or other technical superiority that delivers. I just want something that a SATA SSD can stretch its legs a bit more through. 10G of any kind, SFP+ or 10GbE, is overkill for that. 5G would be nice, but those speeds aren't realistic outside of large sequential transfers, so it doesn't matter, and the hardware doesn't exist anyhow. And I'm not interested in building a highly future-proof setup, as our needs here are extremely unlikely to change much in the next 5+ years. Two users, light usage of the NAS for backups and media storage, with some photo editing. Video editing off the NAS isn't happening - that would require 10G and NVMe storage, so a completely different NAS and a completely different budget range - there aren't more users coming, and the fact that I can't even think of other workloads where 10G would be relevant speaks to the unlikelihood of those suddenly becoming important. Hence my frustration at repeatedly being told that I ought to go SFP+ due to its technical superiority.

My previous car metaphor still stands: I neither want, need, nor have a use for anything more than a hatchback, so insisting on the superiority of a van or SUV just fails to take into consideration the circumstances of my choice. If there are no good hatchbacks, then I might need to look at a small station wagon or crossover instead - something that will still fit in my garage - but going all the way up is overkill and, in this metaphor, too expensive and wouldn't fit the garage. (And in the non-metaphor reality, too complex, time-consuming and difficult to implement across the various PCs here, and unlikely to deliver any real-world performance increase to make up for it, even if it might end up slightly cheaper or comparable in cost.)

So while I really appreciate your efforts at responding and explaining your reasoning, I still don't think your overall recommendations take into account all the relevant circumstances, and are thus missing the mark, regardless of the intent. Of course, starting off by presenting it as if I've been duped into believing that 2.5GbE is useful when it is in fact a scam doesn't exactly make you come off as the most well-intentioned or adaptable giver of advice. Well-meant advice delivered in a condescending tone is still condescending, and "you've been duped, here's the real truth" is a condescending rhetorical device, regardless of its truthfulness from your perspective. I've got the exact same bad habit myself, and I know how difficult it is to shake, but making that effort makes a world of difference to the people receiving your expertise.
Not sure that 2.5Gb Ethernet even makes sense. OP wants an inexpensive solution to interface with a single mirrored pool consisting of two drives and another standalone pool consisting of a single HDD. The IOPS / transfer rates are unlikely to reach the limits of 1GbE networking on a sustained basis. Drive limits are likely around 130 MB/s and 1GbE network speeds max out around 120 MB/s.

If on a budget, why bother with 2.5GbE networking if the NAS doesn't feature an SSD pool or a multi-VDEV HDD pool? The added bandwidth is unlikely ever to be noticed. Spend the money on something more productive, like a mirror drive for the media pool.
You seem to have missed some of the changes in my configuration in that thread - my initial questions were based on a misunderstanding of how L2ARC works and what it's useful for. I had initially imagined using my spare 500GB SSD to speed up reads from my mirrored pool. But realizing how that isn't how TrueNAS/ZFS works (thanks to @jgreco!) I've since set it up as a single high-speed SSD-only pool for temporary storage of in-use data that can use the extra bandwidth (which I will keep synced with the mirrored pool manually). That SSD can easily saturate a 2.5G connection, while fulfilling the main desire for more bandwidth: lag-free photo editing off the NAS. As I've said before, that is essentially my only reason for wanting >GbE.
Follow-up: as of 12.0-U4, TrueNAS can support the RTL8125 chipset as long as you set two rc.conf tunables. Not sure why iXsystems announced RTL8125 support yet still requires the two tunables to be set, but such is life. See here.
Thanks for the tip! Things keep changing rapidly, so I'm glad I haven't committed to anything yet. The good thing about this is that I can get one of these NICs locally, test it (even without getting a matching switch), and return it if it doesn't work (unlike with a used eBay NIC). I'm still on the fence, but at least this option allows for testing things out.

So for now, the options (roughly in order) look like this:
- Buy a cheap RTL8125 NIC and test; if it works, get a pure 2.5G switch (likely TP-Link TL-SG105-M2 or TL-SG108-M2) to match
- Buy a cheap 10G SFP+ NIC, a short twinax DAC (a rather confusing acronym for someone used to that meaning digital-analog converter!), and one of those new Qnap combo switches

The first option is by far the most affordable (only need a single NIC and a switch, I have everything else needed) and easily implemented, so it is a clear first choice, but the second really isn't bad looking either.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,368
Keep in mind that not all of us follow all your posts, so we have no idea of the history.

As for the switches, I'd go SFP+.
 

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
17,172
If that actually works, that'd be pretty cool. We'll have gone from "copper SFP+ is a sucky boondoggle" to "hey here's something useful" in just a handful of years. If anybody tries this and it works, let me know. I'm not burning with curiosity to the level where I'm going to go and fund a bunch of 2.5/5G gear to build a test lab, but I'm certainly fine with adding some notes to the 10 Gig Networking Primer on this...

Even if that doesn't pan out, getting one of the four-port Mikrotik SFP+ switches, which support copper SFP+ modules that do 2.5 and 5GbE speeds, is a plausible fallback position as well.
I haven't personally tested this sort of thing, but STH has. The tl;dr is that there are two generations of SFP+ to 10GBase-T transceiver widely available in a variety of Super China Happy Sun brands, in addition to more reputable brands. The newer generation supports NBase-T if the SFP+ port does (Mikrotik seems to be good about this, and they have decent compatibility matrices).
 

Arwen

Neophyte Sage
Joined
May 17, 2014
Messages
1,405
A comment on 2.5Gbps & 5Gbps Ethernet speeds.

One reason for these was newer WiFi standards with a theoretical throughput of greater than 1Gbps, PER band - and we now have the 6GHz band too. So, if all bands are bursting at full speed to / from wired devices, they are limited by the wired connection the WiFi access point has. In the past, that was 1Gbps Ethernet.

Newer WiFi access points are starting to include 2.5Gbps Ethernet ports because of those wireless speed improvements.

Further, the reason some of these run over copper is that businesses that wired up hundreds of access points in their buildings with copper Cat 5e or Cat 6 can jump to 2.5Gbps or 5Gbps Ethernet without updating all that wiring. Just replace the switch first, then replace the WiFi access points as you can.

One place I worked probably had 500 access points throughout the campus. I'd guess they were PoE (Power over Ethernet), too.
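The back-of-the-envelope version of that aggregate-throughput argument, with purely illustrative (assumed) per-band rates:

```python
# Why a multi-band WiFi access point can outrun a 1GbE wired uplink.
# Per-band rates are illustrative assumptions, not measured values.
bands_mbps = {"2.4GHz": 400, "5GHz": 1200, "6GHz": 1400}

aggregate = sum(bands_mbps.values())  # total wireless burst capacity, Mbps
for uplink in (1000, 2500, 5000):  # candidate wired uplinks, Mbps
    verdict = "sufficient" if uplink >= aggregate else "bottleneck"
    print(f"{uplink:>4} Mbps uplink vs {aggregate} Mbps WiFi aggregate -> {verdict}")
```

Even with conservative per-band numbers, a tri-band AP's aggregate easily exceeds 1Gbps, which is exactly the gap that 2.5G/5G uplink ports on newer access points are meant to close.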
 