Networking Setup

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Hello,

I have a few networking questions:

1. I'm looking to build a 40GbE setup. Should I go for 40GbE, or would it be better to use a dual-port 25GbE NIC and aggregate the ports to make 50GbE?
2. Can I use the same fiber patch cable (LC UPC to LC UPC duplex OM4) across 10GbE (SFP+), 25GbE (SFP28), 40GbE (QSFP+), and 100GbE (QSFP28), or are different cables needed for the different transceiver types?
3. For two clients to each write 10Gb/s to the NAS, each client should have a 10GbE NIC and the NAS should have at least 25GbE, right? Or would 10GbE on the NAS do the job too? I think if the NAS only has 10Gb/s, each client would get about 5Gb/s. Is that correct?
4. Any reviews of the Intel X710/XXV710/XL710 NICs for TrueNAS CORE/SCALE?

Thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Link aggregation does not make 2 x 25GbE into 50GbE. It makes it into 2 x 25GbE, and any particular flow will be limited to no more than 25GbE. The link aggregation only makes it possible to hook the two physical ports up to the same layer 2 network, which is normally a big no-no on ethernet networks. Link aggregation solves that problem.
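
If it helps, here's a rough Python sketch of why that is (the port names and the hash are hypothetical stand-ins, not an actual lagg implementation): an LACP-style layer 3+4 hash pins each flow to exactly one member port, so a single transfer never exceeds one port's speed.

```python
import hashlib

# Hypothetical illustration only (not a real lagg implementation): an LACP
# layer 3+4 hash pins every flow (src/dst IP and port tuple) to exactly one
# member link, so a single transfer can never exceed one 25GbE port.
MEMBER_LINKS = ["sfp28_0", "sfp28_1"]  # assumed two-port 25GbE LAGG

def pick_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)  # stand-in for the real hash
    return MEMBER_LINKS[digest % len(MEMBER_LINKS)]

# One SMB/NFS session is one flow, so it stays pinned to a single 25GbE link;
# a second client with a different tuple may land on the other link.
print(pick_member("192.168.1.10", "192.168.1.20", 50432, 445))
print(pick_member("192.168.1.11", "192.168.1.20", 50433, 445))
```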

The type of fiber you need varies; LC/LC OM4 is generally fine for both 10GbE SFP+ and 25GbE SFP28. 40GbE QSFP+ is just quad 10GbE and requires four fibers in each direction; likewise, 100GbE QSFP28 is just quad 25GbE.
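
For reference, here's a small summary of the lane and fiber counts for the common short-range multimode variants (my own figures, not from the post; double-check against the exact optics you buy):

```python
# Rough reference for common short-range (-SR) multimode optics; figures are
# my own summary, verify against the datasheets of the optics you actually buy.
PLUGGABLES = {
    # name                  (lanes, Gbps per lane, fibers used, connector)
    "SFP+ 10GBASE-SR":      (1, 10, 2, "duplex LC"),
    "SFP28 25GBASE-SR":     (1, 25, 2, "duplex LC"),
    "QSFP+ 40GBASE-SR4":    (4, 10, 8, "MPO"),
    "QSFP28 100GBASE-SR4":  (4, 25, 8, "MPO"),
}

for name, (lanes, gbps, fibers, connector) in PLUGGABLES.items():
    print(f"{name}: {lanes} x {gbps}G lanes, {fibers} fibers, {connector} connector")
```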

Your networking speeds will be limited to the maximum supportable flow rates. Two clients writing at 10Gbps to a 10Gbps ingress interface will be limited to probably 8 or 9Gbps TOTAL (the perfect 10Gbps is unlikely on a contended interface), while two writing at 10Gbps to a 25Gbps ingress interface are more likely to get close to the practical limit. Do note that you need a very large amount of CPU and RAM and a properly sized pool in order to handle 10Gbps; you will NOT be getting that if you only have a 4-HDD RAIDZ2 pool and 16GB of RAM.
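
As a back-of-the-envelope sketch (the ~85% efficiency figure is my own assumption for a contended port, not a measurement), the arithmetic looks roughly like this:

```python
# Rough estimate of per-client write speed when several clients share one
# NAS ingress port; the 0.85 efficiency factor is an assumption, not a spec.
def per_client_gbps(client_nic_gbps: float, nas_nic_gbps: float,
                    num_clients: int, efficiency: float = 0.85) -> float:
    fair_share = (nas_nic_gbps * efficiency) / num_clients
    return min(client_nic_gbps, fair_share)

# Two 10G clients into a 10G NAS port: about 4.25 Gbps each (~8.5 Gbps total).
print(per_client_gbps(10, 10, 2))
# Two 10G clients into a 25G NAS port: each is limited by its own 10G NIC.
print(per_client_gbps(10, 25, 2))
```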

The Intel 7xx cards are generally very good, some of the most performant cards around. Not expected to be problematic.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Link aggregation does not make 2 x 25GbE into 50GbE. It makes it into 2 x 25GbE, and any particular flow will be limited to no more than 25GbE. The link aggregation only makes it possible to hook the two physical ports up to the same layer 2 network, which is normally a big no-no on ethernet networks. Link aggregation solves that problem.
Umm, so does that mean it helps with avoiding downtime, or what? Sorry, just trying to understand.

The type of fiber you need varies; LC/LC OM4 is generally fine for both 10GbE SFP+ and 25GbE SFP28. 40GbE QSFP+ is just quad 10GbE and requires four fibers in each direction; likewise, 100GbE QSFP28 is just quad 25GbE.
So, for 40GbE QSFP+ and 100GbE QSFP28, I would need a different patch cable, right?

Your networking speeds will be limited to the maximum supportable flow rates. Two clients writing at 10Gbps to a 10Gbps ingress interface will be limited to probably 8 or 9Gbps TOTAL (the perfect 10Gbps is unlikely on a contended interface), while two writing at 10Gbps to a 25Gbps ingress interface are more likely to get close to the practical limit. Do note that you need a very large amount of CPU and RAM and a properly sized pool in order to handle 10Gbps; you will NOT be getting that if you only have a 4-HDD RAIDZ2 pool and 16GB of RAM.
Of course, of course. I'm aware of that. There are 32x16TB HDDs, 512GB of RAM, and dual Xeon Platinum CPUs. So, what I understand is that if two clients want to write 10Gb/s to the NAS, the NAS should have a 25GbE NIC so that both clients can get something like 7-8Gb/s, right? I'm just trying to understand the basics here :)

The Intel 7xx cards are generally very good, some of the most performant cards around. Not expected to be problematic.
Sounds good!

What about the heat? Any reports on that?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Umm, so does that mean it helps with avoiding downtime, or what? Sorry, just trying to understand.

Yes, LACP is great at reducing downtime. All major network connections around here (we're a small-scale service provider) are LACP or otherwise redundant. It's very nice to be able to unplug something to move or groom cables and have nothing stop.

So, for 40GbE QSFP+ and 100GbE QSFP28, I would need a different patch cable, right?

It's a multifiber cable (eight fibers, four in each direction), and the pluggable is larger as well.

[attached image]


Just for an idea about the "larger" size relative to someone's hand.

So, what I understand is that if two clients want to write 10Gb/s to the NAS, the NAS should have a 25GbE NIC so that both clients can get something like 7-8Gb/s, right? I'm just trying to understand the basics here :)

No worries. You sound like you're in the right order of magnitude. I sometimes have to disabuse someone of their screwy notions about putting a 100G into their tiny server and having it be lightning fast.

Even though ethernet is switched these days, it is often difficult to get "peak" performance out of adapters in the ways that users tend to expect. There's some tuning information over in the Resources section that will help you out. Having a decent ethernet chipset is basically a prerequisite, and having some healthy expectations is good too. It's the whole "weakest link in the chain" problem; everything has to be awesome and then you'll get good performance.

When everything is stressed out to the max, such as two 10G clients trying to force feed a 10G ingress on your NAS, there tends to be more contention and that is less likely to work well.

What about the heat? Any reports on that?

Yes, the cards run a little warm and you need to ensure airflow over them. A typical rackmount chassis will take care of that for you, but still try to optimize for airflow by selecting a good slot to put your cards in.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
40GbE QSFP+ is just quad 10GbE and requires four fibers in each direction; likewise, 100GbE QSFP28 is just quad 25GbE.
So, for 40GbE QSFP+ and 100GbE QSFP28, I would need a different patch cable, right?
Just a note for QSFP: 40GBase-LR4 and 100GBase-LR4 cram all four channels onto a single fiber in each direction, so a duplex singlemode patch cable can carry the whole thing. This is great on the fiber side, because the MPO/MTP connectors are crazy expensive and using up extra fibers is always a pain. The downside is that transceivers can be crazy expensive - but there's a lot of used stuff out there at very low prices (some months ago, 100GBase-LR4 QSFP28 modules were selling for under 10 bucks a pop - buy a few extras and you're still below the cost of pretty much any other option; 40GBase-LR4 is less available and more expensive).
Small catch: there's a minimum cable length, and the spec is something like 1.5 m. If you're running such a short cable, DAC might be better. If you're running anything outside of a single rack, you'll easily exceed the 1.5 m anyway.
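
As a very rough rule of thumb (my own, with the reach and cost assumptions spelled out in the comments), the media choice mostly follows run length:

```python
# Very rough rule of thumb (my own assumptions, not from a spec sheet) for
# picking media by run length; reach and pricing vary by vendor and optic.
def suggest_media(distance_m: float) -> str:
    if distance_m <= 3:
        return "passive DAC - cheapest, no optics at all"
    if distance_m <= 100:
        return "SR optics over OM4 multimode (duplex LC, or MPO for -SR4)"
    return "LR optics over OS2 singlemode (duplex LC, good to roughly 10 km)"

for d in (1, 30, 500):
    print(f"{d} m: {suggest_media(d)}")
```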
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Yes, LACP is great at reducing downtime. All major network connections around here (we're a small-scale service provider) are LACP or otherwise redundant. It's very nice to be able to unplug something to move or groom cables and have nothing stop.
Oh, I get it now. So that's basically NIC/network redundancy. I'm getting there :)

It's a multifiber cable (eight fibers, four in each direction), and the pluggable is larger as well.

[attached image]


Just for an idea about the "larger" size relative to someone's hand.
Got it. Thanks for clearing it up. I really appreciate that :)

No worries. You sound like you're in the right order of magnitude.
Just trying to set up a perfect TrueNAS :)

I sometimes have to disabuse someone of their screwy notions about putting a 100G into their tiny server and having it be lightning fast.
Hehehe, I can understand that very well. Even though TrueNAS still just runs on computers, the fundamentals are very important if you want to build a robust system that not only serves your needs but also lasts longer :)

Even though ethernet is switched these days, it is often difficult to get "peak" performance out of adapters in the ways that users tend to expect. There's some tuning information over in the Resources section that will help you out. Having a decent ethernet chipset is basically a prerequisite, and having some healthy expectations is good too. It's the whole "weakest link in the chain" problem; everything has to be awesome and then you'll get good performance.
Yes, I can understand that very well :)

When everything is stressed out to the max, such as two 10G clients trying to force feed a 10G ingress on your NAS, there tends to be more contention and that is less likely to work well.
Yes, so do I understand it right that if two clients want to push 10Gb/s to the NAS, the NAS has to have at least a 25GbE NIC? If only a 10GbE NIC is installed, each client will get roughly 3-5Gb/s.

Yes, the cards run a little warm and you need to ensure airflow over them. A typical rackmount chassis will take care of that for you, but still try to optimize for airflow by selecting a good slot to put your cards in.
Sounds good!
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Just a note for QSFP: 40GBase-LR4 and 100GBase-LR4 cram all four channels onto a single fiber in each direction, so a duplex singlemode patch cable can carry the whole thing. This is great on the fiber side, because the MPO/MTP connectors are crazy expensive and using up extra fibers is always a pain. The downside is that transceivers can be crazy expensive - but there's a lot of used stuff out there at very low prices (some months ago, 100GBase-LR4 QSFP28 modules were selling for under 10 bucks a pop - buy a few extras and you're still below the cost of pretty much any other option; 40GBase-LR4 is less available and more expensive).
Small catch: there's a minimum cable length, and the spec is something like 1.5 m. If you're running such a short cable, DAC might be better. If you're running anything outside of a single rack, you'll easily exceed the 1.5 m anyway.
Thanks for the info!

I just checked the cables at FS.com and yes, the price is something like 5 times that of a normal LC to LC cable ;)

That raises a few questions:

1. What's the difference between MPO and MTP?
2. Like the LC to LC cable, the MPO/MTP is already a uniboot variant, right?
3. Like the LC to LC cable, are there any BIF (bend-insensitive fiber) cables out there for MPO/MTP?
4. Is QSFP+ older technology than SFP28?
5. As aqua is the best fit, the highest of all, what is it for MPO/MTP cables?
6. Would it be wise to use SFP28 rather than QSFP+ if one does not care about the pricing?
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
@jgreco I was checking compatibility for the transceivers/modules, specifically the SR optic. However, I found so many part numbers on Intel's site.
[attached screenshot]


Now I'm just not sure which one I should go for, as every listed optic has the same specs, so I'm not sure what the difference is.

Also, what's the ideal distance for SR and LR for the best peak performance?

Secondly, I saw that per Intel's datasheet the X710-DA2 has 10/1GbE. Does that mean one port works at 10GbE and the other works at 1GbE only?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
1. What's the difference between MPO and MTP?
Not something particularly relevant.
4. Is QSFP+ older technology than SFP28?
Yes.
5. As aqua is the best fit, the highest of all, what is it for MPO/MTP cables?
I assume you're talking about fiber grades? Multimode is up to OM5 these days. Singlemode is OS2 and will likely remain OS2 "forever".
6. Would it be wise to use SFP28 rather than QSFP+ if one does not care about the pricing?
I mean, 25 GbE is less than 40 GbE, but it's likely to be cheaper overall.
Also, what's the ideal distance for SR and LR for the best peak performance?
Ideal distance? There's no such thing. There are minimum cable lengths and maximum cable lengths.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
I assume you're talking about fiber grades? Multimode is up to OM5 these days. Singlemode is OS2 and will likely remain OS2 "forever".
No, talking about cable jacket color.

I mean, 25 GbE is less than 40 GbE, but it's likely to be cheaper overall.
Makes sense

Ideal distance? There's no such thing. There are minimum cable lengths and maximum cable lengths.
Yeah, I saw some articles about it; they were about range. Seems like I was confusing it with the fiber type ;)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
However, i find so many part numbers from Intel's site.

That's because a bunch of different mfrs OEM them for Intel, and they are all unique parts. Most will work with any Intel SFP+ slot. I tend to favor the FTLX8519D3BCV-IT as it is super easy to find on the used market.

Secondly, I saw that per Intel's datasheet the X710-DA2 has 10/1GbE. Does that mean one port works at 10GbE and the other works at 1GbE only?

It means it is a dual speed port, and if you stick a 1GbE optic in, it'll run at 1Gbps. You can even get dual speed optics that will work at both 1 and 10Gbps. Many/most but not all devices will support 1Gbps on an SFP+ port.

Also, what's the ideal distance for SR and LR for the best peak performance?

SR and LR refer to the distance; think "SR" = "short range". And by "short range" we mean like up to maybe 200 meters. The spec is technically 300 meters, but you get attenuation at any patch. OTOH you can get 500 meters or more to work....

LR is long range up to 10 kilometers and uses stronger lasers over singlemode fiber. The distance is the primary advantage. You can run a 1 meter LR cable if you want, it's just stupid expensive due to the fiber and optic expense.

Other more esoteric options exist, such as ER which will run 20-40 kilometers (or shorter if attenuators are used).

In general, the ideal distance is "anything that works" and "anything shorter than maybe 75% of the spec limit". Just my opinion. What the hell do I know. Because fiber is light based, it's really just a matter of cramming signal onto a laser at one end and reliably detecting it at the other end. Peak performance in terms of bandwidth is not affected by distance. Peak performance in terms of latency varies with distance and a zero inch long fiber offers "peak performance". :smile:
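
To put the latency point in numbers (using the standard approximation that light in glass travels at roughly two thirds of c, about 5 ns per metre):

```python
# Propagation delay in fiber: roughly 5 ns per metre (light travels at about
# two thirds of c in glass), so distance only affects latency, not bandwidth.
SPEED_OF_LIGHT_M_S = 299_792_458
FIBER_VELOCITY_FACTOR = 0.67  # approximate for standard glass fiber

def one_way_delay_us(distance_m: float) -> float:
    return distance_m / (SPEED_OF_LIGHT_M_S * FIBER_VELOCITY_FACTOR) * 1e6

for d in (3, 300, 10_000):  # short patch, roughly SR territory, roughly LR limit
    print(f"{d} m: {one_way_delay_us(d):.3f} microseconds one-way")
```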
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
That's because a bunch of different mfrs OEM them for Intel, and they are all unique parts. Most will work with any Intel SFP+ slot. I tend to favor the FTLX8519D3BCV-IT as it is super easy to find on the used market.
That's what I thought. I tried to search for the part you mentioned, but couldn't find it.

I prefer to buy a genuine Intel module, not a compatible one with the same part/model number. Mostly because I heard Intel NICs use some kind of lock that blocks the link when using anything other than Intel's own optics. I'm not 100% sure here; you might have a better clue.

It means it is a dual speed port, and if you stick a 1GbE optic in, it'll run at 1Gbps. You can even get dual speed optics that will work at both 1 and 10Gbps. Many/most but not all devices will support 1Gbps on an SFP+ port.
So, both of the ports will do a 10GbE link, right?

SR and LR refer to the distance; think "SR" = "short range". And by "short range" we mean like up to maybe 200 meters. The spec is technically 300 meters, but you get attenuation at any patch. OTOH you can get 500 meters or more to work....

LR is long range up to 10 kilometers and uses stronger lasers over singlemode fiber. The distance is the primary advantage. You can run a 1 meter LR cable if you want, it's just stupid expensive due to the fiber and optic expense.

Other more esoteric options exist, such as ER which will run 20-40 kilometers (or shorter if attenuators are used).
Cool Cool

In general, the ideal distance is "anything that works" and "anything shorter than maybe 75% of the spec limit". Just my opinion. What the hell do I know. Because fiber is light based, it's really just a matter of cramming signal onto a laser at one end and reliably detecting it at the other end. Peak performance in terms of bandwidth is not affected by distance. Peak performance in terms of latency varies with distance and a zero inch long fiber offers "peak performance". :smile:
You clearly know enough to guide people in the right direction ;)

Thanks for all the help. Really appreciate that :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I believe it's Dell that infamously makes a card with mixed 1G/10G.
Only the Gen 12/13/14 proprietary "rNDC" form factor comes to mind, where they often paired an I350 with an X520/X540/X710 to have two 1 GbE ports and two 10 GbE ports. I much preferred those, to be honest, because the "modern" alternative is getting crap Broadcom 1 GbE NICs either embedded in the motherboard or otherwise not replaceable (though there are OCP 2.0 or 3.0 slots in addition to those).
 