It's pretty obvious that consumer-facing companies provide drivers only for consumer platforms, and that enterprise ones provide better enterprise platform drivers. I never said I didn't understand why this is; when I asked about drivers, I was asking whether they exist and/or work. Figuring out the latter - which is pretty specific information - is quite a lot more challenging than having a general understanding of how markets or driver development work, after all. And looking at this from the outside, there's a rather explicit level of condescension in your response.
As for cabling and signalling standards: we're running into physical bottlenecks across the board with copper. From PCIe to DisplayPort to Thunderbolt to HDMI, we're rapidly reaching the point where fiber or active cabling is needed for any significant length - so it's not exactly surprising that 10GbE is a lot more challenging than GbE, or that fiber is
better when you want really high bandwidths, especially across longer distances. Again: I don't have an issue understanding that there are inherent limitations and challenges in play here. But I frankly don't see them as very relevant to me. Which I've also been trying to explain to you in the other thread.
> A lot of noise has been made about 2.5 and 5G, but the only thing that these really have going for them is that they can support PoE. This is basically a massive swindle by the industry to try to sell everybody on technology that is not a step forward.
>
> So here's the thing. You have bought into a technology being promoted by grifters. They're relying on the fact that the experience people have had with 1G is subpar, which is ironically often because their own 1G technology is subpar. They are now selling you subpar 2.5G or 5G.
Here, as I pointed out in the other thread, we're looking at a pretty major difference in perspective. I'm taking a guess here and saying that you probably work in IT, maybe even with server infrastructure. Whatever the case may be, it's pretty clear that your requirements and mine are worlds apart. I certainly wouldn't describe my experience with 1GbE as sub-par - it's a bit slow, but it's dead stable, plug-and-play, dirt cheap and ubiquitous, lets me make purpose-built cabling with a $10 tool and is easily user serviceable in all relevant ways. Outside of large transfers (which are
rare) and photo editing over the network (which is more frequent, but maybe a monthly activity), GbE would be perfect for my needs. I just want a bit more.
The reason for 1GbE stagnating as the standard for home users is also rather obvious: nothing else has been even remotely necessary outside of edge cases. Networking for 99.999% of home users means internet access, and internet connections are overwhelmingly <100Mbps. It's only in recent years that HDDs have started significantly exceeding GbE speeds, and SSDs for networked storage are still barely a thing. Fiber internet is bringing faster speeds to more people, but most fiber connections are still in the 100-200Mbps range. So why change something that works, does everything the
vast majority needs it for, and costs next to nothing? It's been perfectly fine.
Of course, I'm not really happy with GbE. But that is
only down to transfer speeds. That is literally the only thing I want improved. I'm well aware that this is a luxury desire - but given that nGbE is (finally!) proliferating, I was hoping it would be a rather small luxury. So far it's looking a bit worse than I was hoping - but still within the realm of possibility. Now, I understand that 10GbE is inefficient and hot - but I neither want nor need 10GbE. I would literally never make use of the bandwidth. I'm only talking about 10GbE hardware because it seems to be the only way to get anything above GbE on TrueNAS.

It might well be that a 10GbE NIC running in 2.5GbE mode consumes just as much power as at full speed, but a few watts more in one computer hardly matters. Current-gen Intel and Realtek 2.5GbE chipsets reportedly consume barely more power than their GbE counterparts. That doesn't help me given that these aren't supported in TrueNAS, but it illustrates that the increased power draw of a (single) 10GbE NIC is something I'm willing to accept if needed. After all, I'm not going to be running massive switches with tons of connections, so a few watts more per 10G port doesn't matter to me. I'm also working within the confines of an ~800ft² apartment, so whether a reliable signal can be maintained beyond, say, 50m of Cat6 cabling is entirely irrelevant to my use case.
You're talking as if nGbE is useless and the only natural step for the consumer world would be to move from GbE straight to 10GbE fiber-based networking. I think you're missing a lot of perspective on just how massive that step is for consumer applications. Backwards compatibility goes out the window. Installation prices shoot up - 2.5GbE chipsets cost $3-4, after all, which is why they're found on every $150 motherboard these days, while you're not getting 10GbE in that price range no matter what; that's more of a $400 motherboard feature. Then there's the cabling (fragile, weird new connectors, no end-user termination, large bend radii making for cumbersome installation), not to mention replacing cabling for those who already have Ethernet in their house. And so on and so forth. All the while, the benefits of 10GbE over 2.5 or 5GbE in the end-user space are entirely negligible. Servers and datacenters take whatever bandwidth you can give them and make use of it - that's not the case whatsoever for home users. There's a noticeable difference to me if my NAS can deliver ~220MB/s rather than ~90, but the change from 220 to 900 might not be noticeable at all - my storage isn't that fast, nor is there that much load on the network! And I'm not likely to move to an SSD-based pool any time soon.
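To put rough numbers on where those ~90/~220/~900MB/s figures come from, here's my back-of-the-envelope math. The ~94% efficiency factor is my own assumption for protocol overhead - real-world results vary with the NIC, the protocol (SMB/NFS) and, above all, the disks:

```python
# Rough effective file-transfer throughput per Ethernet link speed.
# Assumes ~94% of line rate survives frame/IP/TCP overhead (my estimate);
# actual numbers depend on NIC, protocol and storage.
EFFICIENCY = 0.94

for gbps in (1, 2.5, 5, 10):
    mb_per_s = gbps * 1000 / 8 * EFFICIENCY  # Gbit/s -> MB/s
    print(f"{gbps:>4} GbE ~ {mb_per_s:.0f} MB/s")
```

That puts GbE around 118MB/s at the wire (hence ~90 in practice), 2.5GbE around 294, and 10GbE around 1175 - far beyond what a HDD pool can feed anyway.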
For most end users, GbE is still perfectly fine - including crappy Realtek hardware on supported OSes - but some of us are starting to see those transfer speeds as a bottleneck. nGbE, whether 2.5 or 5, is a perfectly logical stepping stone towards improving that - it's well matched with accessible storage today - and is specced to realistically cover our wants and needs for the next decade if not longer. Your attitude is undoubtedly based on a lot of experience, but from what you're saying here, it also fundamentally fails to take into account the actual real-world circumstances of people like me.
I completely understand that we're coming from
very different worlds. That much is clear. It seems that a similar understanding is completely absent in your responses, however. You can reiterate how fiber is technically superior till you're blue in the face - it still won't make it suitable to my use case. It's too large of an investment (either money, time or both), too complicated (I'm already working on one doctorate, I really don't want a second one just to set up a >GbE home network!), requires too much bespoke hardware, and is
really not suited to the hardware I need it for. I made this thread after looking further into this following our previous discussion, feeling that I'd gotten as far as I could on my own reading. The thread asks a very specific question. Your response is equivalent to me asking "I've decided to buy a small hatchback, should I get a Toyota or a Honda?" and you responding with "hatchbacks are crap and a scam, get a van or SUV instead".
I was really hoping to avoid having to reiterate this. Which is why I wrote what I did in the initial post here. 10G SFP+ simply doesn't fit my needs. It's completely excessive. Going Ethernet might have a higher
baseline cost, but that discounts both the work I'd need to do to learn enough to find good deals in the SFP+ world, and the cost difference between buying off-the-shelf parts and importing off eBay here in Sweden (+25% VAT + ~$10-15 in processing fees for anything entering from outside the EU). With Ethernet, I already have the cabling and tools required, know how to deal with every link in the chain, and can buy parts locally. With SFP+ I'd avoid that expensive NIC, but I'd either need to buy (kind of expensive) SFP+-to-Ethernet converters for hooking up the non-SFP+ devices (which would be everything except the NAS), get SFP+ NICs for everything (not an option for my main desktop or the HTPC, both of which are ITX), or buy a $600+ switch that supports both - and either way I'd need to source potentially expensive cabling or import used cabling off eBay. I'd also need to learn what, to me, looks like a complete mess of compatibility regarding switches, transceivers, cabling standards, and so on. It would undoubtedly net me a technically superior setup -
the benefits of which I won't ever actually see. Instead, I would spend a lot more time and effort (and potentially, though not necessarily, money) on this, rather than just putting together something that fulfills the needs that I have and works with what I've already got.
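For what it's worth, the import math I'm weighing looks roughly like this. The 25% VAT and the ~$10-15 processing fee are from my own experience above; the $12.50 default fee and the assumption that VAT applies to item + shipping are simplifications, and the example prices are made up:

```python
# Sketch of the landed cost of a used part imported into Sweden
# from outside the EU. Assumptions: 25% VAT on item + shipping,
# plus a flat per-shipment carrier processing fee (~$10-15).
def landed_cost(item, shipping=0.0, handling_fee=12.5, vat=0.25):
    return (item + shipping) * (1 + vat) + handling_fee

# A hypothetical $40 used SFP+ NIC with $10 shipping lands at $75:
print(landed_cost(40, shipping=10))
```

So a "cheap" eBay find easily ends up costing nearly double its sticker price here, before counting the time spent learning what's even compatible.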
Hence why I started this thread asking a very specific question: given that only the Intel X550 and various Aquantia NICs support nGbE and have some semblance of FreeBSD support, is there a reason to splurge on the Intel NIC at 2-3x the price of an Aquantia? A 'hey-listen-here-now' half-rant about why SFP+ is superior to Ethernet doesn't help me answer that question - though I guess indirectly you did confirm that Intel drivers are simply more stable (or established/accepted), and that going for Aquantia would be more of a gamble. But that could have been answered in two sentences, without the condescending lecture assuming I have no idea what I'm talking about.