10 Gig Networking Primer

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Was wondering, I have a Asus P8B WS motherboard and the only PCIe 2.0 slots that are available are two PCIe 2.0 x16 (@ x4). Will those be fast enough for a card in each of them?
The SolarFlare SFN6122F cards need 8 lanes of PCIe 2.0 (5.0GT/s). If I'm reading your question right and your Asus system board only has 4-lane slots available, then they won't work at full speed on your system.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The SolarFlare SFN6122F cards need 8 lanes of PCIe 2.0 (5.0GT/s). If I'm reading your question right and your Asus system board only has 4-lane slots available, then they won't work at full speed on your system.

That may be sort-of true, and to be a bit clearer: 5.0GT/s works out to roughly 4Gbit/sec of usable bandwidth per lane after 8b/10b encoding, so at 4 lanes on the mainboard, a dual 10G card is probably going to be able to hit the limit of the PCIe bus if you try hard.
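For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch in Python; the 0.8 efficiency figure is the standard PCIe 2.0 8b/10b line coding, and the 20 Gb/s target is simply both ports of a dual 10G NIC running flat out:

```python
# Back-of-the-envelope PCIe 2.0 bandwidth check -- a sketch, not a benchmark.
# PCIe 2.0 signals at 5.0 GT/s per lane with 8b/10b encoding, so usable
# throughput is roughly 5.0 * 8/10 = 4 Gb/s per lane, per direction.
GT_PER_LANE = 5.0            # gigatransfers/s for PCIe 2.0
ENCODING_EFFICIENCY = 0.8    # 8b/10b line coding
usable_per_lane_gbps = GT_PER_LANE * ENCODING_EFFICIENCY  # ~4 Gb/s

for lanes in (4, 8):
    slot_gbps = lanes * usable_per_lane_gbps
    print(f"x{lanes} slot: ~{slot_gbps:.0f} Gb/s vs. 20 Gb/s for a dual-port 10G NIC")

# x4: ~16 Gb/s -- enough for one saturated 10G port, but not both at once.
# x8: ~32 Gb/s -- headroom for both ports (before protocol overhead).
```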

But what's the point of worrying about it? You're limited by the board. The only way to bump the speed is to toss it out of an airplane at 30,000 feet. You'll get that extra speed boost, at least for a few minutes; try it if you don't believe me. ;-)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
"That may be sort-of true..."
What part of my statement was false? :smile:

The card would work with fewer than 8 lanes -- at something less than its maximum capability -- and, things being the way they are, that might be plenty fast enough for most real-world use cases. But SolarFlare claims it needs 8 if you want the full 10G mojo:
[Image: SolarFlare SFN6122F specifications showing the PCIe 2.0 x8 requirement (sfn6122f.jpg)]
 

ianrm

Dabbler
Joined
Aug 22, 2020
Messages
27
Hi,
Just upgraded to U5 and found that my 10Gb card no longer works.
Sad.
Ian
After saying many bad things, I have the 10Gb card back online. I had to use a console to correct the configuration, as none of the NICs would connect.
 

heisian

Dabbler
Joined
Oct 3, 2020
Messages
21
I just finished my FreeNAS build and setup today. I used an old Mini-ITX board that I had used as a macOS (hackintosh) media computer, and am currently running four 2TB disks in RAID 1+0 (two mirrored vdevs in my zpool). Unfortunately the board doesn't really support ECC memory (it accepts ECC DIMMs but won't run them in ECC mode), so eventually I will look for one that does.

For now, though, I'm getting pretty good speeds, between 70-90 MB/s - but of course, the world is not enough for me, and so I naturally clicked on this 10G primer.
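(Side note on those numbers: 70-90 MB/s is already most of what a single gigabit link can deliver. A rough sketch of the arithmetic, with the overhead allowance being an assumption rather than a measurement:)

```python
# Rough sanity check: 70-90 MB/s is close to the ceiling of a single gigabit
# link. The overhead figure below is an assumed allowance for Ethernet
# framing plus TCP/IP headers, not a measured value.
LINE_RATE_BPS = 1_000_000_000
OVERHEAD = 0.06

practical_mbps = LINE_RATE_BPS * (1 - OVERHEAD) / 8 / 1_000_000
print(f"~{practical_mbps:.0f} MB/s practical ceiling on 1GbE")
# Anywhere in the 70-110 MB/s range usually means the network, not the pool,
# is the bottleneck -- which is exactly why 10G starts to look attractive.
```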

It seems like one big takeaway from the original post is don't do 10GBaseT copper (at least, if you want to save money).

So with that in mind, and embracing SFP+ and all its unfamiliar tech, would you all think the following equipment could get me up and running with 10G speeds?

MikroTik 5-port switch
https://www.amazon.com/MikroTik-CRS305-1G-4S-Gigabit-Ethernet-RouterOS/dp/B07LFKGP1L
4 SFP+ ports, 1 RJ45 port
SFP+ ports can route to workstations + NAS
RJ45 port can route to internet modem/router

SFP+ Network Card for each server/station
https://www.amazon.com/TRENDnet-Standard-Low-Profile-Brackets-TEG-10GECSFP/dp/B01N4FYWUN/

SFP+ Direct-Attach Active Optical Cable (for longer runs up to 49ft)
https://www.amazon.com/Macroreer-Active-Optical-Cable-SFP-10G-AOC10M/dp/B07SKFRP9H

SFP+ Direct-Attach Passive Copper Cable (for shorter runs up to ~20ft)
https://www.amazon.com/10G-SFP-DAC-Cable-SFP-H10GB-CU0-5M/dp/B01M09C9NZ

Am I missing anything here?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I just finished my FreeNAS build and setup today. I used an old Mini-ITX board that I had used as a macOS (hackintosh) media computer, and am currently running four 2TB disks in RAID 1+0 (two mirrored vdevs in my zpool). Unfortunately the board doesn't really support ECC memory (it accepts ECC DIMMs but won't run them in ECC mode), so eventually I will look for one that does.

For now, though, I'm getting pretty good speeds, between 70-90 MB/s - but of course, the world is not enough for me, and so I naturally clicked on this 10G primer.

It seems like one big takeaway from the original post is don't do 10GBaseT copper (at least, if you want to save money).

And keep your hair (if you have any).

SFP+ Network Card for each server/station
https://www.amazon.com/TRENDnet-Standard-Low-Profile-Brackets-TEG-10GECSFP/dp/B01N4FYWUN/

SFP+ Direct-Attach Active Optical Cable (for longer runs up to 49ft)
https://www.amazon.com/Macroreer-Active-Optical-Cable-SFP-10G-AOC10M/dp/B07SKFRP9H

SFP+ Direct-Attach Passive Copper Cable (for shorter runs up to ~20ft)
https://www.amazon.com/10G-SFP-DAC-Cable-SFP-H10GB-CU0-5M/dp/B01M09C9NZ

Am I missing anything here?

I have no idea if the Trendnet card works well for Windows. Don't expect it to work well for FreeNAS, if at all. The straightforward choices for FreeNAS are an Intel X520-{DA,SR} card or the Chelsio card that iX uses.

The DAC cables are generally a bad idea in my opinion. eBay is flush with cheap used optics and fiber is cheap.
 

heisian

Dabbler
Joined
Oct 3, 2020
Messages
21
And keep your hair (if you have any).



I have no idea if the Trendnet card works well for Windows. Don't expect it to work well for FreeNAS, if at all. The straightforward choices for FreeNAS are an Intel X520-{DA,SR} card or the Chelsio card that iX uses.

The DAC cables are generally a bad idea in my opinion. eBay is flush with cheap used optics and fiber is cheap.

OK it looks like there are some $45 X520-DA1 cards on eBay, which is even better news for me.

Why is DAC a bad idea? Not that I have or need to, just curious.

Going all optical seems better to me, will check eBay, thank you!
 

kspare

Guru
Joined
Feb 19, 2015
Messages
507
OK it looks like there are some $45 X520-DA1 cards on eBay, which is even better news for me.

Why is DAC a bad idea? Not that I have or need to, just curious.

Going all optical seems better to me, will check eBay, thank you!
It’s not a bad idea. DAC cables *can* have an advantage with latency. There is no light conversion taking place at each end. Other than that, they work the same.

Depending on your rack, once the MMF fiber is in the transceiver, it can stick out a lot farther than a DAC cable.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
It’s not a bad idea. DAC cables *can* have an advantage with latency. There is no light conversion taking place at each end.

Uhhhhhh... what?

Fiber with SFP+ optics is generally lower latency than DAC.

[Image: Mellanox latency comparison chart (Mellanox-chart2.png)]

Other than that, they work the same.

Depending on your rack, once the MMF fiber is in the transceiver, it can stick out a lot farther than a DAC cable.

That part is sorta true, but you shouldn't be making harsh bends in DAC cables.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
507

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681

This flies in the face of more than 10 years of conventional wisdom, so I'm going to call it bull**** unless we see some other major vendors showing the same results. Someone tried the same argument years ago with GBIC/XENPAK vs CX4 and it simply didn't pan out in copper's favor.

Even if it were true, the difference between fiber and DAC claimed by Arista is so small as to be almost meaningless. Optoelectronics are the primary thing powering terabit Ethernet (note: not actually terabit, just an industry term for faster-than-100Gbps), though I'm still waiting for 100Gbase-SX to make an appearance; as of now we're pretty much stuck with QSFP-style multiple parallel channels to get us to those speeds.

In addition, it only applies to passive DAC, and DAC is still a loser in terms of cable routing and manageability.

https://extranet.www.sol.net/files/misc/biffiber.jpg


That 100G stuff at the end is as thin as spaghetti; a bundle of four of them, which is what I run to hypervisors for our 4x10G connectivity, is thinner than a single conventional Cat5 cable and ridiculously flexible (well, OK, I cut my teeth on horribly bend-sensitive, easily-breakable fiber). Another upside is that I can order it to length, so if I want a 1.44m fiber patch, I can have it. Try that with DAC cables.

Your DAC cables suck. :smile:
 

kspare

Guru
Joined
Feb 19, 2015
Messages
507
This flies in the face of more than 10 years of conventional wisdom, so I'm going to call it bull**** unless we see some other major vendors showing the same results. Someone tried the same argument years ago with GBIC/XENPAK vs CX4 and it simply didn't pan out in copper's favor.

Even if it were true, the difference between fiber and DAC claimed by Arista is so small as to be almost meaningless. Optoelectronics are the primary thing powering terabit Ethernet (note: not actually terabit, just an industry term for faster-than-100Gbps), though I'm still waiting for 100Gbase-SX to make an appearance; as of now we're pretty much stuck with QSFP-style multiple parallel channels to get us to those speeds.

In addition, it only applies to passive DAC, and DAC is still a loser in terms of cable routing and manageability.

https://extranet.www.sol.net/files/misc/biffiber.jpg


That 100G stuff at the end is as thin as spaghetti; a bundle of four of them, which is what I run to hypervisors for our 4x10G connectivity, is thinner than a single conventional Cat5 cable and ridiculously flexible (well, OK, I cut my teeth on horribly bend-sensitive, easily-breakable fiber). Another upside is that I can order it to length, so if I want a 1.44m fiber patch, I can have it. Try that with DAC cables.

Your DAC cables suck. :smile:

Let's agree to disagree, lol. They both work well for 10/40Gb.

With the constant monkeying we do with our FreeNAS boxes, I also prefer DAC, so that I'm not exposing a fiber end that may need to be cleaned/polished to ensure it's working optimally. I actually went and got myself fiber certified and learned quite a bit about the maintenance... Until we get these boxes running how we want and we're not constantly unplugging them, we'll stick with DAC. If you never touch them, fiber is definitely a nice way to go.
 

heisian

Dabbler
Joined
Oct 3, 2020
Messages
21
Interesting paper - it does look like we’re only talking about a few nanoseconds difference, either way.
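(To put rough numbers on "a few nanoseconds", here's a back-of-the-envelope sketch using assumed per-metre propagation figures -- roughly the speed of light divided by the fiber's refractive index, and a typical velocity factor for twinax; neither number comes from the chart or the Arista paper:)

```python
# Back-of-the-envelope propagation delay -- a sketch using rough, assumed
# per-metre figures, not vendor data. Transceiver/PHY latency and the host
# network stack dominate either way.
NS_PER_M_FIBER = 4.9    # ~c / refractive index of ~1.47 (assumption)
NS_PER_M_TWINAX = 4.3   # typical twinax velocity factor (assumption)

for length_m in (1, 3, 7):
    fiber_ns = length_m * NS_PER_M_FIBER
    dac_ns = length_m * NS_PER_M_TWINAX
    print(f"{length_m} m: fiber ~{fiber_ns:.1f} ns, DAC ~{dac_ns:.1f} ns, "
          f"difference ~{fiber_ns - dac_ns:.1f} ns")
```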

As someone who does almost no networking beyond home stuff, it sounds like the real trade-off is a heavy but ductile cable versus a very light but somewhat brittle one.

I have to admit the incredibly thin 100G cable is quite enticing, sexy, even.

Currently I just have my cat6 cable hanging around in some places - sounds like I’ll need to properly route and protect my line if going w/ fiber.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I have to admit the incredibly thin 100G cable is quite enticing, sexy, even.

I should probably clarify that the cable is theoretically rated for 100G (OM4 cable) based on the improved optical characteristics, but no one is pushing it that far yet. In fact, everything beyond 10G has been a trainwreck because no one was willing to do the hard work.

We got to 40GbE by bundling 4x10GbE lanes (40GBASE-SR4 with MPO connectors) and this was a relatively easy evolution due to QSFP+ -- hopefully it is easy to understand that sticking four 10G lanes into a single connector/cable is not a huge advance of any sort.

We have seen 4x25G and 2x50G inside of QSFP28, 8x50G in QSFP-DD 400GBASE-SR8, and CFPx stuff also tops out at 2x50G but I'm having trouble thinking of anything that's been faster than 50G on a MMF strand. When you look at the long haul single mode fiber (SMF) stuff, you get into WDM and all that.
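To make the lane arithmetic concrete, here's a tiny illustrative sketch; the entries are just examples of the parallel-lane flavours mentioned above, not an exhaustive list of standards:

```python
# How the faster Ethernet flavours are built from parallel lanes -- an
# illustrative sketch, not an exhaustive list.
lane_configs = {
    "40GBASE-SR4 (QSFP+)":    (4, 10),   # 4 lanes x 10 Gb/s
    "100GBASE-SR4 (QSFP28)":  (4, 25),   # 4 lanes x 25 Gb/s
    "400GBASE-SR8 (QSFP-DD)": (8, 50),   # 8 lanes x 50 Gb/s
}
for name, (lanes, per_lane) in lane_configs.items():
    print(f"{name}: {lanes} x {per_lane} Gb/s = {lanes * per_lane} Gb/s")
```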

Unfortunately, once you move beyond 10G, it gets increasingly more expensive to go optical. The QSFP* DAC cables tend to be a lot cheaper as you get up there in speed. The real problem, though, is that there are so many things to choose from, and so much incompatible hardware that has been deployed.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
507
I should probably clarify that the cable is theoretically rated for 100G (OM4 cable) based on the improved optical characteristics, but no one is pushing it that far yet. In fact, everything beyond 10G has been a trainwreck because no one was willing to do the hard work.

We got to 40GbE by bundling 4x10GbE lanes (40GBASE-SR4 with MPO connectors) and this was a relatively easy evolution due to QSFP+ -- hopefully it is easy to understand that sticking four 10G lanes into a single connector/cable is not a huge advance of any sort.

We have seen 4x25G and 2x50G inside of QSFP28, 8x50G in QSFP-DD 400GBASE-SR8, and CFPx stuff also tops out at 2x50G but I'm having trouble thinking of anything that's been faster than 50G on a MMF strand. When you look at the long haul single mode fiber (SMF) stuff, you get into WDM and all that.

Unfortunately, once you move beyond 10G, it gets increasingly more expensive to go optical. The QSFP* DAC cables tend to be a lot cheaper as you get up there in speed. The real problem, though, is that there are so many things to choose from, and so much incompatible hardware that has been deployed.
Very true. For my 10Gb DACs I use Fiber Store or Cisco. For my 40Gb it has to be strictly Cisco cables.
 

Parzival30

Cadet
Joined
Sep 18, 2020
Messages
3
Does anyone have a setup using these cards: 2x Mellanox MCX311A-XCAT CX311A ConnectX-3 EN 10GbE SFP+ with cable? I just set up my TrueNAS 12 server and am transferring a ton of data over, and I'm trying to figure out the best way to do this across the 10Gb connection. When I log into my QNAP NAS and test with HSB3, I can establish the connection with the TrueNAS server and test the transfer speed, but I'm only getting 150-220MB/s transfer rates. Can anyone help me with this? Thanks!
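One way to narrow this down is to take the disks and the backup protocol out of the picture and measure the raw TCP path first. Below is a minimal sketch of such a test in Python; the port number, chunk size and 2 GiB total are arbitrary assumptions, and a single Python stream won't saturate 10G, but it will quickly show whether you're stuck near gigabit-class speeds:

```python
# Minimal raw-TCP throughput check (a sketch). Run "server" on one box and
# "client <host>" on the other. Port, chunk size and total bytes are
# arbitrary values chosen for illustration.
import socket
import sys
import time

PORT = 5201
CHUNK = 1 << 20          # 1 MiB per send
TOTAL = 2 << 30          # 2 GiB test size

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):   # drain until the client disconnects
                pass

def client(host):
    buf = b"\0" * CHUNK
    sent = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL:
            s.sendall(buf)
            sent += CHUNK
    secs = time.time() - start
    print(f"{sent / secs / 1e6:.0f} MB/s ({sent * 8 / secs / 1e9:.2f} Gb/s)")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

If this reports several hundred MB/s while the backup job still sits at 150-220 MB/s, the bottleneck is more likely the disks or the transfer protocol than the 10Gb link itself.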
 

Borja Marcos

Contributor
Joined
Nov 24, 2014
Messages
125
That part is sorta true, but you shouldn't be making harsh bends in DAC cables.
A big advantage of passive DAC cables is lower power consumption as there is no optical conversion involved. Also, optical transceivers can eventually wear out.

Disadvantage of passive DAC: Short runs, but it can be great to connect to a top-of-the-rack switch.

But yes, you are absolutely right. DAC cables are twinaxial and they can be delicate. A sharp bend will completely destroy cable performance.
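To give a feel for the power point, here's a rough sketch with assumed ballpark per-end wattages (not datasheet values):

```python
# Rough power comparison for one 10G link (two ends). The per-end wattages
# are assumed ballpark figures, not datasheet numbers.
WATTS_SR_OPTIC = 0.9      # typical short-reach SFP+ optic (assumption)
WATTS_PASSIVE_DAC = 0.15  # passive DAC, essentially just the connector (assumption)

delta_w = 2 * (WATTS_SR_OPTIC - WATTS_PASSIVE_DAC)
kwh_per_year = delta_w * 24 * 365 / 1000
print(f"~{delta_w:.1f} W less per link with passive DAC "
      f"(~{kwh_per_year:.0f} kWh/year)")
```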
 

Revolution

Dabbler
Joined
Sep 8, 2015
Messages
39
I'm looking for two 10GbE cards from the recommendations (one for Win10, one for TrueNAS), but I'm not sure where to look in particular. Most of the eBay offerings on the German eBay site are fake X520 cards, and I don't know how to tell whether a Mellanox ConnectX-2/3 is fake or not. Can someone point me in the right direction on where to look / what to look out for so I don't buy counterfeit products?

Thanks!
 