
10GBase-T: Best to avoid it if you can

Joined
Jun 15, 2022
Messages
674
No, not OMg. You probably want OM3 or OM4. I really like this particular product:


It's bend insensitive (probably don't take that too literally) and MUCH easier to groom than conventional fiber; both fibers are inside a single sheath.
I was more referring to TCO. When I have a need for speed I'll have to read up on it, then probably ask members to bail my --- out of whatever situation I managed to get myself into.

One thing is for sure, the TrueNAS SCALE system I'm putting together can push wayyyy more data than I'm asking of it.
 

kherr

Explorer
Joined
May 19, 2020
Messages
67
All my 10G is fiber ....... switches are a LOT cheaper ...... they use less power ...... and you pay (for transceivers) only for what you use ......

My 2 cents
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Time for me to (re)read this, and the links therein:
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Browsing through the (very informative) blog posts at FS.com I just found out that there are now Cat 8 copper cables designed for 25GBase-T/40GBase-T. So the misery will not even end with 10G…
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Browsing through the (very informative) blog posts at FS.com I just found out that there are now Cat 8 copper cables designed for 25GBase-T/40GBase-T. So the misery will not even end with 10G…

This isn't new, but I believe it to be a nonstarter designed to sell expensive cable to suckers.

The main problem is that the power issues involved in signalling are going to be awful, and you are probably going to find that it needs to be limited to something substantially less than the classic 100 m length of 10/100/1G. Even 10G only works at a distance by burning more watts. So this might end up being a solution for wiring your servers to your ToR switch, someday. But we've seen how well this worked out for 10GBASE-T. We're now ten years into what could (should?) have been the heyday of 10G copper.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
This isn't new, but I believe it to be a nonstarter designed to sell expensive cable to suckers.
I did not claim it was news (just new to me), and I certainly did not imply it was a good idea.
But as far as "selling expensive cables to suckers" goes, FS does not even register… Look up these "mere" Cat. 7 Diamond cables for audiophiles audiophools, replete with marketing buzzwords and attention to details such as the arrow showing the direction of best flow for digital sound over the Ethernet link o_O :
Of course all AudioQuest Ethernet cables honor the directionality inherent in all analog and digital audio cables; arrows on the jackets indicate the direction (from source to destination) for the best audio performance.
Prices are hidden in the PDF catalogue… The "low-end" Pearl is already borderline insane, and the top-end Diamond is plainly out of this world; do not check if you have a weak heart, not to mention a weak wallet! :eek:
[Screenshot: AudioQuest Ethernet cable price list]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
But as far as "selling expensive cables to suckers" goes, FS does not even register…

You're looking at the wrong thing. Classic error when dealing with sleight of hand. I'm not talking about patch cables. Who gives a $#!+ about patch cables. I'm talking about the big expensive thousand-foot spools of plant cable. We've seen this before. The trick is to play up that the next big thing is 100GBASE-T and that you should future-proof your plant by installing Cat9X cable today, spending oodles to do so, only to have the next big thing be a general flop. Meanwhile someone makes a fortune selling Cat9X cable, but it turns out in 5 or 10 years that the buyers really should have bought Cat10R cable. I view this as selling expensive cables to suckers. Maybe that's just me.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Look up these "mere" Cat. 7 Diamond cables for audiophiles audiophools, replete with marketing buzzwords
Having recently installed Roon and now hanging around their forums, my mind is blown by how many people are insisting that things in the digital domain (patch cables, power supply for the PC, network switches, using fiber rather than copper for networking, etc.) absolutely affect the sound, even though they can't measure any difference, and unequivocally reject blind A/B testing. But I suppose this is straying from the subject of this thread...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Coming (many years ago) from the medical electronics field where minor irregularities could have outsized effects, I expect that there probably ARE things in the digital domain that can cause some effects on the final result. Just as you can tease out HDMI failures in a variety of ways. As a digital electronics guy for years, I will note that it is easy to become complacent in the mindset that "digital is all or nothing" and indeed it is true that for the most part it either works or fails spectacularly. However, you can get interesting results when you are just on that threshold of getting fail-y. Bit flips, NEXT, etc.

Does it affect the sound? One has to assume that "affects" means changing it in some measurable way, because if it ain't measurable, then it wasn't reproduced differently. Digital signals sent via copper should be identical to digital signals sent via fiber. Or via smoke signal. But corruption of the digital in a way sufficient to "affect" the output without also making it unlistenable seems unlikely.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
One has to assume that "affects" means changing it in some measurable way
...and that's just the thing. The claim is that there are audible changes that can't be, or at least aren't, measured--and on its face, I suppose it's plausible, particularly that a relevant factor might not be understood to be a relevant factor, and therefore might not routinely be measured. But that doesn't explain why people are unwilling to do blind A/B testing. Confirmation bias is a powerful thing.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
...and that's just the thing. The claim is that there are audible changes that can't be, or at least aren't, measured--and on its face, I suppose it's plausible, particularly that a relevant factor might not be understood to be a relevant factor,

But here's where it goes to bullchips. The way that digital audio works is that you take measurements of an analog audio signal using a device known as an ADC (analog to digital converter) which samples the incoming analog waveform at a given sampling rate, and you end up with a series of integer values ("digital") that approximate the analog waveform.

You then convert back into analog using a DAC (digital to analog converter) which accepts a series of digital integer values and generates an analog waveform that is an approximation of the original waveform.

There is lossiness in the ADC process; the conversion is never precise or perfect. There are sampling rates that are commonly understood to be at the limits of what human hearing can discern.

But here's the thing. Once you've performed the ADC process, you are in possession of a list of digital integers. Pumping those integers over the network doesn't change the integers; it doesn't matter if you're putting them into the DAC over fiber or copper or whatever. The same integers go into the DAC, and the DAC will generate the "same" pseudo-copy of the waveform every time it is fed that list of integers.
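
To make that concrete, here's a throwaway Python sketch (nothing to do with any real audio stack, and the sample values are made up): it packs a short burst of fake 16-bit samples, shoves the bytes through a local socket pair standing in for whatever link you like, and checks that what arrives is bit-for-bit what was sent.

# Illustrative only: "transport" a buffer of fake 16-bit PCM samples through a
# local socket pair and confirm the received bytes are identical to the sent ones.
import hashlib
import socket
import struct

# Fake "ADC output": 2048 signed 16-bit integers (values are arbitrary).
samples = [(i * 37) % 32768 - 16384 for i in range(2048)]
payload = struct.pack(f"<{len(samples)}h", *samples)

tx, rx = socket.socketpair()   # stands in for copper, fiber, or smoke signals
tx.sendall(payload)
tx.close()

received = b""
while True:
    chunk = rx.recv(65536)
    if not chunk:
        break
    received += chunk
rx.close()

# If the link delivered the data at all, the checksums match exactly; there is
# no "subtle" difference left over for a DAC to render differently.
assert hashlib.sha256(payload).digest() == hashlib.sha256(received).digest()
print("bit-identical:", hashlib.sha256(received).hexdigest()[:16])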

It's important to note that both the ADC and DAC processes are places where imperfections can (and do) enter the processing. This is why I find it hilarious that people would argue that the medium carrying the signal in digital form is where it could somehow be subtly damaged; at that stage the only corruption you can get is stuff like bit flips (which have weird and unpredictable results). These present themselves as defects in the resulting audio, and they are generally not subtle artifacts but rather large clicks or pops.

Once you complicate this further by adding audio codecs, the impact of bit flips and the like only gets worse.
 

abufrejoval

Dabbler
Joined
May 9, 2023
Messages
20
Well, towards the end, the discussion derailed a bit into audio-cheat cables...

  • I can't fault the logic that copper will eat more energy at higher frequencies and longer distances vs. optics
  • And the overhead of using complex modulations in order to push more data with the same frequencies costs energy and latency, too
That's just physics, I don't argue with physics.

Economy is another matter and for me that includes being lazy and stuff being good enough for my needs.

I still think that the main reason 10Gbit was so crazy expensive initially was more about ASIC vendors wanting to cash in on a unified storage and network fabric and on virtualization support: they saw the consolidation all that offered, and how it directly threatened their bottom line, so they consolidated the functionality into fat ASICs and upped the prices.

But the complexity made for bad quality: ASIC and driver bugs that utterly frustrated players like VMware, who after some years just went software for switches and overlay networks (that won't work at 400Gbit, so it's changing again).

So a lot of these ASIC vendors, on both the NIC and switch side, went bust or merged; 10Gbit was largely a wasteland in consumerland, and even in data centers a lot of that VM offload stuff remained "dark silicon".

Aquantia & Co. changed that with NBase-T, which came far too late, but it's been around for a while now. And it does change the power game, because it's no longer 10 Watts per port, but around 3 at 10Gbit, lower at 5/2.5/1Gbit or with Green Ethernet on shorter cables: much like with laptop CPUs, power consumption now tracks usage and the ports run nowhere near as hot as they used to.

I remember powering on a 48-port HP 10GBase-T switch in my office during testing and I had people running in from half the floor because it sounded like a jet taking off: I think the power supply was rated higher than those of most 4U servers, and this was a 1U unit.

Current 8-port NBase-T switches are unnoticeable; even if they use a bit of active cooling, they couldn't do that at 80 Watts. 5-port designs are fully passive, which means less than 10 Watts for everything.
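
Rough numbers, as a Python back-of-the-envelope sketch: the per-port figures are the ones quoted above (~10 W for the old 10GBase-T parts, ~3 W for NBase-T at 10Gbit); the fixed overhead for PSU, fans and switching silicon is just a guess, not a datasheet value.

# Back-of-the-envelope switch power estimate from per-port figures.
def switch_power_w(ports: int, watts_per_port: float, overhead_w: float) -> float:
    return ports * watts_per_port + overhead_w

print("48-port 10GBase-T, old ASICs:", switch_power_w(48, 10.0, 40.0), "W")
print("48-port NBase-T at 10Gbit:  ", switch_power_w(48, 3.0, 20.0), "W")
print("8-port NBase-T at 10Gbit:   ", switch_power_w(8, 3.0, 5.0), "W")
print("5-port NBase-T, light load: ", switch_power_w(5, 1.5, 2.0), "W")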

And then it changed the price point: €50 per port at retail is still a little high for my taste, but I keep hearing that 1/2.5Gbit ports cost mainboard vendors about €1, not the €25 I pay for a USB NIC. But then I've been running my first NBase-T switches for 8 years without feeling the need to upgrade. That's actually OK, especially if they last me another eight: I don't think I'll ever need 400Gbit in the home lab.

Copying a couple of 50GB VMs from storage to a workstation or laptop over 1Gbit/s was a bit of a bother, at 1Gbyte/s I'm not missing my flight or overdosing on coffee.
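
Quick arithmetic in Python, if anyone wants to check (decimal units, ignoring protocol overhead and disk speed, so real transfers are a bit slower):

# Naive transfer-time estimate for a given payload size and link speed.
def transfer_minutes(size_gb: float, link_gbit_per_s: float) -> float:
    return (size_gb * 8) / link_gbit_per_s / 60   # GB -> Gbit, then seconds -> minutes

payload_gb = 2 * 50   # "a couple of 50GB VMs"
for speed in (1, 2.5, 5, 10):
    print(f"{payload_gb} GB over {speed:>4} Gbit/s: ~{transfer_minutes(payload_gb, speed):.1f} min")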

And speaking of laptops, many of those couldn't go beyond 2.5Gbit because USB3 was the best they could do (OK, the QNAP 5GBase-T adapter would do something 3.2Gbit-ish...). Still, almost 3x the speed was well worth getting €25 USB NICs; yes, from Realtek, and they really do just work, too (on Linux).

Now, with Aquantia Thunderbolt adapters, they'll also do 10Gbit, just like all those NUCs, which are much less useful at 1 or 2.5Gbit alone given all the compute and storage power they have these days. No driver issues on Linux, on every distro with a kernel 4.9 or newer.
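
If you want to see which kernel driver a NIC actually got bound to, a small Python sketch (Linux only, walks sysfs; the PCIe/Thunderbolt Aquantia parts normally show up as "atlantic", the Realtek USB ones as "r8152"):

# List network interfaces and the kernel driver bound to each one (Linux sysfs).
import os

SYS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYS_NET)):
    driver_link = os.path.join(SYS_NET, iface, "device", "driver")
    if not os.path.exists(driver_link):
        continue   # virtual interface (lo, bridge, ...) with no physical device
    driver = os.path.basename(os.path.realpath(driver_link))
    print(f"{iface}: {driver}")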

I'm not a network guy. I am a connectivity guy. I've longed for decades to have a simple USB or TB based fabric that just works for all East-West traffic. I keep having blood pressure issues when they talk about 5/10/20Gbit USB whatnot, but only offer 1 or 2.5 Gbit Ethernet: What's the use of all that bandwidth on USB unless you can talk to something else?

It's truly crazy to go Ethernet when you could go four-lane PCIe over a €10 TB cable! Most of the time I don't need to bridge kilometers; I just want to build a fault-tolerant hyperconverged cluster from economy components.

And as long as I can't have that, I'll make do with switches and cables that just connect what I have at the fastest bandwidth the device supports, without having to manage the NIC, the GBIC, the optics, the cable and different ones for every speed and length on each side.

I understand I have to make that effort once I leave the "rack" under my desk, but burning a few extra Watts is the fault of those guys who failed to make a USB or TB fabric work, not mine for not going optical on Ether.

I'm not saying that NBase-T or Aquantia is universally good.

But saying it's universally bad is ignoring very valid use cases where they are actually better or just good enough.
 

da_da

Explorer
Joined
Apr 7, 2021
Messages
67
The overall problem I see with copper and MM fiber is that they're like milk: they expire sooner than you think. SM, which may be more costly, is cheaper in the long run to support any speed you desire.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Current 8-port NBase-T switches are unnoticeable,
As are current 8-port SFP+ switches:
Completely passive, so therefore completely silent.
it changed the price point: €50 per port at retail
The one above is US$32/port. And fully managed.
very valid use cases
I think such things are few and far between. And given that the context of this resource is for file servers--specifically, those running Free/TrueNAS--the "very valid use cases" in that context are non-existent.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Economy is another matter and for me that includes being lazy and stuff being good enough for my needs.

That's fine. But you're also making another error here:

I'm not saying that NBase-T or Aquantia is universally good.

This isn't the SmallNetBuilder forums. It's the TrueNAS Community forums, and what we're talking about here needs to be topical to TrueNAS systems, or if not, should be posted elsewhere or at least in the Off-Topic forum here. The Aquantia is a tripping hazard for use in a TrueNAS system and we don't recommend it. It has all the exciting issues of trying to use an Attansic/Atheros ethernet or other off-brand ethernets where there are weird issues with vlan support, MTU support, etc., and quite a few other oddities. The purpose of these forums is to guide the new users towards whatever we know to work for all use cases under two separate operating systems. If you are happy with your Aquantia, by all means, please feel free to go use it, but it's been troublesome enough in the past for me to have flagged it as a disrecommended chipset, which is pretty noteworthy because I'm a notorious cheapass.

ASIC and driver bugs that utterly frustrated players like VMware, who after some years just went software for switches

Certainly the PSOD issues with the X700 family were famous. As far as I know, the VMware vSwitch stuff has not undergone significant changes in at least a decade. The Standard vSwitch is virtually unchanged other than having gained expandable maxports at some point. The Distributed Switch stuff has evolved somewhat in response to NSX, but that's a complicated play. I'm more of a networking purist and I see more value in virtual functions or (someday) scalable IO virtualization.

around 3 at 10Gbit, lower at 5/2.5/1Gbit or with Green Ethernet on shorter cables:

That is still catastrophically high.

But saying it's universally bad is ignoring very valid use cases where they are actually better or just good enough.

Well, you're welcome to play the TrueNAS Aquantia support guy here in the forums if you like. Otherwise, until this is demonstrated to be stable, reasonable, useful, and advisable, I'm certainly not going to be recommending Aquantia. Until then, Aquantia has the distinction of being classed as the same quality as stuff like Realtek or Attansic 1G parts. They're good enough for some applications, but not recommended for TrueNAS.
 

abufrejoval

Dabbler
Joined
May 9, 2023
Messages
20
As are current 8-port SFP+ switches:
Completely passive, so therefore completely silent.
Great, there are more choices today than when I bought my Buffalo and Netgear switches.
But I don't actually need any management on my switches.
In fact I don't want any management on my switches, because I don't get paid for installing security updates and patches: stupid can be good, and safer, with no attack surface.
But that's a home-lab, which is much more about connectivity than security or managing bandwidth, not what we use at work (where others manage that).
The one above is US$32/port. And fully managed.

I think such things are few and far between. And given that the context of this resource is for file servers--specifically, those running Free/TrueNAS--the "very valid use cases" in that context are non-existent.
The file server appliance may be best served by an optical port.

But again, I don't operate file server appliances as a job; it's the clients and the VM hosts that are far more important, and those machines are much less likely to have SFP+ ports, especially when they are natively 2.5Gbit or use USB/TB for Ethernet.

Now I could have mixed switches, but that means managing a partitioned population and more complexity, and who needs that?

For me NBase-T is about flexibility and ease of use across the large range of systems that I use. Some are pretty powerful workstations, others are modest Atoms still running an HCI cluster. The bandwidth range of 1/2.5/5/10 Gbit with a single type of cable and no driver issues is compelling for me. And I just see these Aquantia NICs working perfectly; I can't quite see how they "SUUUCK!!" so much.

I won't argue it's the best fit for everyone. But even if I were to rebuild what I operate from scratch today, I think I'd stick with copper cables for the convenience and ease, and for the fact that they satisfy what I need. And I'd advise the same to anyone in a similar situation.

At least until somebody offers me something like a PCIe fabric using Thunderbolt cables, which are also copper, I believe.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Resources have been reorganized and some links are still wonky. This discussion isn't specific to CORE or SCALE; it applies to both about equally, since it is a discussion of hardware shortcomings. I can fix the placement of the discussion thread, but because of that it isn't clear where it should end up. Every time something gets reorganized, something new breaks.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The overall problem I see with copper and MM fiber is that they're like milk: they expire sooner than you think. SM, which may be more costly, is cheaper in the long run to support any speed you desire.
Could you explain your thinking?

Multimode OM3/OM4 fibre, which is the common recommendation for short distance, is rated for up to 100 Gb/s over 70/150 m.
Granted, QSFP+ or QSFP28 transceivers for the common duplex LC connector are over €300 apiece; the cheap parts for these rates use MTP/MPO-12 connectors, which is not your usual cable, but then prices from fs.com start at €50 for QSFP+ and €120 for QSFP28.
But with singlemode OS1/OS2 fibre there is no MTP/MPO, it's all duplex LC, and QSFP+/28 modules are over €500 apiece.

If I consider that 10 Gb/s is "basic speed" (because below that one may as well stay with copper), and accordingly understand "any speed you desire" as "much faster than basic", I do not see how the economics play in favour of SM over MM.
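
To put the comparison in plain numbers, a small Python sketch using only the per-module prices quoted above (cables, patch panels and the switch ports themselves are left out, and prices obviously drift):

# Per-link optics cost = two transceivers; prices in EUR are the rough
# fs.com-class figures from this post, not current quotes.
module_price_eur = {
    "MM 40G QSFP+ (MTP/MPO-12)": 50,
    "MM 100G QSFP28 (MTP/MPO-12)": 120,
    "MM 40G/100G (duplex LC)": 300,
    "SM 40G/100G (duplex LC)": 500,
}

for module, price in module_price_eur.items():
    print(f"{module}: ~{2 * price} EUR per link (two transceivers)")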
 