If I get the Chelsio T520.. can I use both ports

beowu!f

Dabbler
Joined
Oct 3, 2021
Messages
22
Noob with these dual port cards. I wish there was a single port option, but all the 10gig supported cards seem to be dual port at $300+ a pop. I was hoping there would be a supported single port Cat6+ (not SFP+) card for $150 or so. I bought TP-Link TX401s.. not realizing there were issues with support. Oh well, I do have 2 Windows computers I plan to have on 10gig anyway so I'll use them there.

I cringe at spending $300+ right now on what was supposed to be a "use this because it's not being used for anything" project NAS build. As it is I went nuts on buying 5 16TB X18 HDDs.. "just in case I ever need 50TBs of storage". Primarily bought it for the future, but also I want to transcode 4K HDR files to HD H.264/265 versions for my Plex server as I have kids/family that connect usually via phone or tablet.

Anyway.. so it looks like I don't have a choice but to spend the money. Fine.. It will be good, right?

So what does 2 ports give me? I am going to be using a Ubiquiti 8 port 10gig aggregator switch. 1 of those goes to my UNVR, 1 to my UDM Pro. 2 ports will go to my 24port switch which has 2 10gig ports. The other 4 are open. I figured only 1 for NAS.. but if there is a chance I can use BOTH ports and get 20gigs.. that would be pretty cool.. though all my other computers/devices are 1gbps max.

So...
* Can I connect both ports to 8 port 10gig aggregate switch?
* If so, what does that do for me?
* Do I have to "link" the two together to get 20gbps (if it's possible), or does it allow 10 gig TO the NAS and 10gig FROM the NAS at the same time?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
You can get a dual-port 10G NIC on eBay for roughly a tenth of that $300 -- $31 for this one:


Not recommending that particular seller; they were just one of the first that popped up after a quick search.

I use these SolarFlare SFN6122F cards and they work well with both FreeNAS/TrueNAS and ESXi. Here's a link to firmware, drivers, and so forth:


If you're leery of flashing a NIC's firmware, or if you absolutely insist on a single-ported NIC, you can get an Intel X520-DA1 on eBay for less than $100; this one is $75:


There are other choices, too; these are just two cards that are known to work with FreeBSD (and therefore FreeNAS/TrueNAS).

You can use both ports on a dual-ported NIC in a LAGG group -- but you won't get 20G; you'll get two 10G pipes. Unless you have a large number of users, LAGG groups generally aren't worth the trouble. You'll never get close to 10G of throughput anyway, because once you move to 10G networking your disk I/O typically becomes the new bottleneck.
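To picture why a LAGG doesn't add up to 20G for a single transfer: LACP hashes each connection onto one member link, so a single SMB/NFS copy rides one 10G port, and only separate flows can land on different ports. Here's a minimal Python sketch of the idea (the hash and the addresses are made up for illustration; this is not the actual algorithm your switch or FreeBSD's lagg driver uses):

```python
# Toy illustration of LACP flow hashing: each flow maps to ONE member
# link, so one big file copy tops out at that link's 10G.
from zlib import crc32

LINKS = 2  # two 10G ports in the lagg

def member_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Pick which member link a flow lands on (simplified hash)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return crc32(key) % LINKS

# One workstation copying one big file = one flow = one 10G link:
print(member_link("192.168.1.50", "192.168.1.10", 50123, 445))

# A second simultaneous copy (different source port) *may* hash to the
# other link -- that's the only way the second port ever helps:
print(member_link("192.168.1.50", "192.168.1.10", 50124, 445))
```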

Good luck!
 

beowu!f

Dabbler
Joined
Oct 3, 2021
Messages
22
Thank you for that. Good info. Ideally I want CAT6 vs SFP ports. Not sure if the $50 or so for an SFP+ to CAT6A adapter plus the cost of the card is much less than a dual port card (well.. maybe a bit less). Also unclear if they work well. Hate the idea of another failure point.
I am not a big fan of refurbished. Have only had 1 thing out of about 20 the past 10+ years refurbished that worked out well, the rest usually crapped out or didn't work. I love that $30 price though.. but how the heck is that possible.. 10% the price of a good card.. seems almost impossible.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Used ("refurbished" if you prefer) IT gear is pretty thoroughly depreciated, and often sells for a substantial discount vs. new. And electronics typically have a very long lifespan. A year or two ago, I was able to buy a few T420 cards at under $100 each--can't find any now. But just bought one of those Solarflare cards and installed it in my new Microserver Gen10+, and it's working quite nicely.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Thank you for that. Good info. Ideally I want CAT6 vs SFP ports. Not sure if the $50 or so for an SFP+ to CAT6A adapter plus the cost of the card is much less than a dual port card (well.. maybe a bit less). Also unclear if they work well. Hate the idea of another failure point.
I am not a big fan of refurbished. Have only had 1 thing out of about 20 the past 10+ years refurbished that worked out well, the rest usually crapped out or didn't work. I love that $30 price though.. but how the heck is that possible.. 10% the price of a good card.. seems almost impossible.
No idea what type of refurbished stuff you've had bad experiences with... But as @danb35 pointed out above, in the special case of enterprise-class IT gear, perfectly-functioning equipment often sells for a steep discount because big data centers and large companies regularly replace equipment once it's been depreciated. Most of it is sold off by companies that specialize in this market; many of them are on ebay.

I built 3 of my 5 servers (see 'my systems' below) using 'refurbished' equipment; and even the 'new' servers have refurb components. They work very well indeed.

As always, caveat emptor applies, and you should test refurbished gear within the return/exchange window. But the same warning applies to equipment purchased 'new' as well.

On the question of CAT6 vs. SFP:

SFP is the future; 10Gbase-T -- twisted-pair CAT6 -- has far fewer choices for switches, draws more power, runs hotter, etc. It's on the way out...

In fact, 10Gbase-T runs so hot that I installed a jumper on my 'BRUTUS' server motherboard to disable its built-in 10Gbase-T port, which I'm not using anyway. It ran 10 degrees C cooler afterwards, but is still the hottest component on that system!

10Gbase-T may be a perfect fit for your situation, and what you choose is entirely up to you, of course. I just wanted you to be aware of the overall situation with respect to 10GbE networking.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
in the special case of enterprise-class IT gear, perfectly-functioning equipment often sells for a steep discount because big data centers and large companies regularly replace equipment once it's been depreciated. Most of it is sold off by companies that specialize in this market; many of them are on ebay.

And I've discussed this numerous times in detail. As someone who professionally BUYS used gear, there's a whole bunch of tells.

Here's a little background:

You might look for forum posts from me on "Shenzhen" or "back-alley" if you're interested in other instances of this.

Basically the way this works is that a lot of electronics are produced in Shenzhen, ranging from top-of-the-line to bottom-of-the-barrel. The stuff that is not produced there might still be cloned there. :^)

For stuff that is produced there, not every product is perfectly produced. PCB's can contain errors. Assembled boards can fail testing. Chips can be rejects. These things should technically be destroyed, but sometimes they aren't. So if someone runs off with a pocket full of Intel ethernet controllers, maybe real ones that really work, or failed ones that don't, they can find someplace to make a passable version of the PCB and some child slave labor to solder the bits together, bang it into a box, and sell it to some naive computer parts distributor in North America like NewEgg (I've got knockoff examples sold by NewEgg so I get to use them as a poster child). Worse are all the things "from China" on eBay.

So a few rules to live by.

1) Highly popular profitable devices like Intel X520's, LSI HBA's, etc., are common targets. During their peak sales period, these devices are insanely profitable for their legitimate manufacturers. So they're also profitable to the counterfeiters.

2) Unusual devices like Chelsio cards are UNlikely to be cloned, because the kind of companies buying them are buying direct from major integrators who got their devices direct from Chelsio, not via the channel.

3) Used devices are a crap shoot. Best luck is to be had buying from a company that is clearly and obviously de-racking servers from data centers. That eBay seller with twenty different kinds of server chassis and a hundred of each of them, who's also selling cards and CPU's and memory, that's legit basically 100% of the time. On the other hand, the eBay seller with a hundred X520's but other listings including bulk tennis shoes and children's toys, RUN LIKE HELL.

And here are some basic rules.

It's totally possible to find reputable vendors on eBay, but you have to do a little homework and use your brain a bit. Cards sold by companies registered on eBay for a year or more who are clearly partsing up old data center racks in volume here in the US, with thousands of sales and 99.9%+ reps, are not going to be a problem. Cards sold by kittyboo28314 registered last week in San Jose CA alongside other random non-data-center crap are going to be fakes. There's a slight spectrum in between.

As for 10G networking:

On the question of CAT6 vs. SFP:

SFP is the future; 10Gbase-T -- twisted-pair CAT6 -- has far fewer choices for switches, draws more power, runs hotter, etc. It's on the way out...

I have to say, we were all excited in 2013 when Netgear released the first "affordable" copper 10G switch. Too bad the fans screamed and it was still more than a hundred bucks a port. But it hinted that the decade-long morass that had the world stuck at 1G copper between 2003-2013 was coming to an end.

However, in the years since then, a full eight years now, copper has shown itself to be a pile of bovine excrement, and even the vendors have moved on to try to sell the somewhat more technically feasible 2.5G and 5G copper.

Basically, it's a fool's errand to try to go with category cable. First off, Cat6 is not 10G compliant. A certified Cat6 cable run might be able to run 10G over 30-50 meters, but not the full 100-meter channel. You really need Cat6A or Cat7 for 10G, and then you really also need to get it certified as well, because 10G is more sensitive to many of the issues which afflict copper cabling. By way of comparison, pretty much any chump can set up a 10G fiber network. We have a nice sticky that discusses 10 Gig Networking as a Primer.

In a weird way, as a network engineer, I never quite grasped what @Spearfoot just slapped me in the face with ... I've been approaching this as a "copper is not quite ready yet" issue. Having come from the world of POTS and DS1's all the way to gigabit ethernet, I've watched all the technical challenges get resolved for each evolution. However, we do seem to be stuck here, and the physics of it all might just be too much.

A few things made for the rapid evolution to 1G copper in the '90s and early 2000s. The silicon requirements and the cablemaking were not significant impediments. Data requirements kept going up dramatically. But we reached what appears to be a point of sufficiency. 1G is sufficient to run 4K video, or copy files at a reasonable speed. The explosive growth of servers was a driver for 1G ethernet in the data center in the early 2000s, making switch silicon port-dense, featureful, and ultimately inexpensive.

But we're not seeing that for 10G. Port density with copper 10G is power-hungry. Besides, servers in the data center went to fiber 10G and then moved on; many are now 25/40/100G. There's nothing driving widespread adoption of 10G copper. It can't do PoE, so it is worthless for wireless access points, one of the few plausible drivers. This means that port costs aren't that likely to come down. Further, it's a real PITA to wire up 10G copper; it's flaky and touchy and more than a bit annoying. You don't need 10G to run 4K video. What's the driver for this?

So hey @Spearfoot I'm going to concede ... the possibility ... that you are right when you say

SFP is the future; 10Gbase-T [...] It's on the way out...

And congratulations, you made me think my way through something today. B-P
 

beowu!f

Dabbler
Joined
Oct 3, 2021
Messages
22
Good stuff both of you. Thank you for all that info.

So.. if SFP is here to stay.. does that mean we'll see computers coming with SFP and CAT soon (e.g. one of each) to start to support SFP setups? As you said.. no real driver for 10G.. what's the driver for SFP to replace 1G in the home? I mean, 8K video is barely here.. but it's coming. 8K can stream over 1gig with H265 compression, and I think I read there is an even newer codec in the works that halves it again (or close to it??).. can't remember now. But.. I would imagine most people will go to the faster wifi stuff coming out in the next many years. However, maybe I am rare.. but I freaking hate the never ending spinning wheels/waiting/etc that wifi video devices always seem to have.

To that end, I was just thinking to myself (a little off topic.. but along the lines of video networking..) that if Plex added the ability for clients to store server video locally.. on devices with the Plex app that have an SSD or other storage.. it would be phenomenal to download a movie locally and play it there, rather than stream it over the same home network others may be using (and clogging up).

As for my need for 10gig.. well I personally do a variety of things that sometimes entail TBs of data. For example, when I was recording my kids sports, I was using the BMPCC4K camera with BRAW.. about 250GB per hour, and would store a TB at a time (sometimes we had tournaments with 2 games a day, across 3 kids.. so I might have as much as 3TBs or more worth of games over a busy 3 day weekend). WAY WAY overkill.. yes.. but I figured.. I enjoy it and the cheap BMPCC4K had the ability to do so, so why not. So copying files like that takes a LONG time when trying to back up from my workstation (1gig) to the NAS (also 1gig). I didn't want to spend the crazy money on a 10gig Synology setup.. and had this threadripper system just sitting idle for 2+ years now. So I figured.. shoot.. if I have to do transcoding, especially from 4K HDR content, and as well have uses for the server.. I may as well make it handle 10gig to the NAS since 5 7200rpm drives would likely saturate the 1gig network.

I bought the "cheap" TP Link 10gigs not thinking of driver issues with TrueNAS. So hence I am now looking for a good 10gig PCIe3 card for this server.

SO.. as ALL of my network is 10GBase-T... if I get the $30 SolarWhatever 10gig card.. which is SFP+.. will any SFP+ to CAT6A adapter work? Or do you guys recommend specific ones that are good to use over some others?

I am FINE with SFP+ network..that's great. I just am unclear how to then run it to the CAT6A network port on the aggregate switch. That has 8 10gigi CAT6A ports, and 2 SFP+ ports. I saw a demo of setting that up, and they ran the 2 SFP+ ports to the 24port Pro switch (Unifi gear), and had 2 10gig pips from switch to aggregate switch. I think they did link aggregation, which if I understand.. basically gives 2 10gig lines.. so copying 2 files at same time can move at 20gig (total.. 10gig each file) when set up that way.
THUS.. I will ask this stupid question.. does TrueNAS support dual 10gig ports (on a card that has them) such that running both those to two ports on the 10gig aggregate switch.. would that effectively allow me to copy two files at 10gig each (assuming my computer ALSO had dual 10gig ports)? Or is there a lot of issues with making that work?
 

beowu!f

Dabbler
Joined
Oct 3, 2021
Messages
22
Ordered that refurb solarflare.. taking both you guys' word for it. :) Now I just need the answer on the SFP+ to 10gig CAT adapter.. and should hopefully be good to go.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Ordered that refurb solarflare.. taking both you guys' word for it. :) Now I just need the answer on the SFP+ to 10gig CAT adapter.. and should hopefully be good to go.
Well, gosh! You might have been better off getting a 10Gbase-T card! :smile:

Are you already using the SFP ports on your switch? If not, you should be able to connect your server to one of them -- or both, if you want to try out link aggregation. The SolarFlare cards aren't fussy about transceivers, so you can use just about any transceivers you want with an optical cable, or you can use Twinax DAC cables, like these:


If you're stuck with using the copper ports on your switch, there are 10GbaseT-to-SFP+ transceivers you can use, like this one -- which costs twice what you paid for the NIC! You'd plug this gizmo into your NIC, and then run a copper cable from the gizmo to your switch. And it's gonna run hot.


Refer to @jgreco 's 10G primer linked above to wade into the details.

But again... you'll find out that disk I/O becomes the bottleneck once you go 10GbE. Ask me how I know...

Still, it's not uncommon to get 3+ Gbps file transfers, depending on how your FreeNAS/TrueNAS pool is set up and so forth.
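To put rough numbers on that, here's a back-of-envelope sketch (the ~180 MB/s per-disk figure is a generic 7200rpm ballpark, not a measurement of your X18s):

```python
# Why the pool, not the 10GbE link, usually sets the ceiling.
# Assumes ~180 MB/s sustained sequential per 7200rpm disk and a 5-wide
# RAIDZ1 vdev, which streams from roughly 4 data disks.

DATA_DISKS = 4             # 5-wide RAIDZ1 = 4 data disks + 1 parity
MB_PER_SEC_PER_DISK = 180  # assumed sequential rate per drive

pool_mb_s = DATA_DISKS * MB_PER_SEC_PER_DISK
pool_gbit_s = pool_mb_s * 8 / 1000

print(f"Sequential best case: ~{pool_mb_s} MB/s (~{pool_gbit_s:.1f} Gb/s)")
# -> ~720 MB/s, ~5.8 Gb/s -- already under 10GbE before SMB overhead,
#    small/random I/O, or fragmentation shave it down further.
```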
 

beowu!f

Dabbler
Joined
Oct 3, 2021
Messages
22
Well, gosh! You might have been better off getting a 10Gbase-T card! :smile:

Are you already using the SFP ports on your switch? If not, you should be able to connect your server to one of them -- or both, if you want to try out link aggregation. The SolarFlare cards aren't fussy about transceivers, so you can use just about any transceivers you want with an optical cable, or you can use Twinax DAC cables, like these:


If you're stuck with using the copper ports on your switch, there are 10GbaseT-to-SFP+ transceivers you can use, like this one -- which costs twice what you paid for the NIC! You'd plug this gizmo into your NIC, and then run a copper cable from the gizmo to your switch. And it's gonna run hot.


Refer to @jgreco 's 10G primer linked above to wade into the details.

But again... you'll find out that disk I/O becomes the bottleneck once you go 10GbE. Ask me how I know...

Still, it's not uncommon to get 3+ Gbps file transfers, depending on how your FreeNAS/TrueNAS pool is set up and so forth.

Right on. So.. I am not entirely sure the best way to go. In a separate thread I may have indicated what I have going on. Basically I built a DIY "box" of wood and put 10u rails on both sides. At the bottom I decided instead of a 4u server rack mount, I'd just mount the old x399 Designare m/b.. bought a neat little drill tool to drill M3 holes, put in M3 m/b posts, and mounted it up off the wood. Yay me. I also ordered a 5 drive bay hotswap unit (forget the name but it was like $150), and 5 x18 16TB drives. Mounted that (was a pita given the little room left on the floor with the m/b in there).

I set up TrueNAS on one of the 3 NVMe SSDs (only have 1 in the system for now.. 950 Pro 500GB). I set up the HDDs in RAIDZ1.. I figure one drive of redundancy is plenty, especially because I have a 2nd Synology NAS with 5x8TB HDDs in it.. though I do plan to turn it off and only fire it up when I need to do some backups from this unit. In the 10u DIY rack I will have a UDM Pro, 24port Pro switch (which has 2 10gig and 24 1gig ports), the UNVR for camera recording, and the 8 port 10gig unifi aggregate switch.

SO.. with that out of the way.. I am absolutely unsure how best to set this all up. From a video (or a couple) I watched, it looks like I run the SFP+ from the UDMP to the 10gig aggregate switch. I run 2 10gig lines from that to the 24port switch via SFP+ cables (I bought 3 SFP+ short cables to do all this). Then, I would run the CAT6A cable to the UNVR (back side of the unit). That is 4 of the 8 ports used. SO.. ALL of these are in the same cabinet as the server.. so running 1 or 2 SFP+ to CAT6A cables from the server to the 10gig aggregate switch is fine. I am good with just the one, as with 4 drives in RAIDZ1 I doubt I can saturate a 10gig connection. The only reason I was asking to run 2.. is since I have 4 ports available, and nothing else is going to use them.. why not.. right? At the very least, I get 2 10gig file transfers at the same time possibly.

So.. when you say "it's going to run hot" I assume you mean the SFP+ to CAT6A adapter gets very hot to the touch?

I could potentially run the SFP+ from the NAS to one of the 24port Pro's SFP+ ports?? I am not sure if that is doable.. and if so, if it makes the most sense. OR.. would it be a better use of the 2 SFP+ ports on that switch to connect them to the 10gig aggregate? 24 x 1gig.. with ALL 24 devices somehow actively moving files to/from the NAS.. is the only way I could ever imagine using the full 20gigs.. and if I understand you right, that wouldn't matter anyway.. if 24 devices tried to access the NAS at the same time, I assume there are still only the 2 10gig pipes.. they don't transfer multiple files at the same time.. right? So again the only benefit I would get from wiring up 2 ports from the NAS to the 10gig switch.. is in the off chance I copy two files at once from any of the devices connected to the 24 port switch.. yah?

In the cabinet I have 4 intake fans, 4 exhaust fans (AC Infinity), and a controller, and am putting a better fan on the CPU tomorrow. I am hoping that is enough cooling. Right now in my little crappy 5u case there is no cooling at all.

Anyway.. wanted to say thank you regardless. Really appreciate all your help.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So.. if SFP is here to stay.. does that mean we'll see computers coming with SFP and CAT soon (e.g. one of each) to start to support SFP setups? As you said.. no real driver for 10G.. what's the driver for SFP to replace 1G in the home? I mean, 8K video is barely here.. but it's coming. 8K can stream over 1gig with H265 compression, and I think I read there is an even newer codec in the works that halves it again (or close to it??).. can't remember now. But.. I would imagine most people will go to the faster wifi stuff coming out in the next many years. However, maybe I am rare.. but I freaking hate the never ending spinning wheels/waiting/etc that wifi video devices always seem to have.

If we were going to see widespread adoption of ANY 10G technology for the consumer market, the time for that has now passed, I think.

Vendors saw the increased wifi speeds as an excuse to design and promote 2.5G and 5G copper that did support PoE, and are successfully selling moderate amounts of it. This is in many ways a total scam, because while theoretical WiFi 6/6E speeds are beyond 1G, most installs are not going to see this in practice, just as most AC installs (1.3Gbps theo max) do not end up finding 1G to be a limiting factor. Yes, I realize "beamforming" and all the other upsides make 6/6E more competent than AC. But as a practical matter, 1Gbps wasn't likely to become a SERIOUS problem for access points until the next evolution. So it's mostly a scam to sell new faster-than-1G switching silicon.

If we had seen things like the trashcan Mac Pro adopt 10G copper in 2013, that might have pushed the issue. However, right now, what we're seeing is some tepid adoption of 2.5/5G chipsets on high end mainboards. This is eating 10G copper adoption at both ends, because 2.5/5 isn't compatible with 10G copper, so anyone who bought into 10G copper switchgear isn't going to get any advantage from a 2.5/5G client, and people who buy into 2.5/5G switchgear are buying into a dead end ecosystem.

It's possible that we'll see a "different" 10G copper standard evolve from 2.5/5 at some point, because theoretically there is some room for improvement there, but this is just going to be a trainwreck from a compatibility point of view.

Also, at this point, 10G is a 20 year old technology. We've been stuck at 1G copper for many years, which has been "sufficient" for many uses. With a 5x increase to 5G, I'm having trouble seeing 10G as a plausible next step. We used to do order-of-magnitude upgrades (1M/10M/100M/1G/10G), but a mere doubling of speed (5G->10G) is less compelling from an upgrade perspective, especially with compatibility issues. But there really are some practical limits to copper, and it might be that we've arrived at the end of the RJ45 train line.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Right on. So.. I am not entirely sure the best way to go. In a separate thread I may have indicated what I have going on. Basically I built a DIY "box" of wood and put 10u rails on both sides. At the bottom I decided instead of a 4u server rack mount, I'd just mount the old x399 Designare m/b.. bought a neat little drill tool to drill M3 holes, put in M3 m/b posts, and mounted it up off the wood. Yay me. I also ordered a 5 drive bay hotswap unit (forget the name but it was like $150), and 5 x18 16TB drives. Mounted that (was a pita given the little room left on the floor with the m/b in there).

I set up TrueNAS on one of the 3 NVMe SSDs (only have 1 in the system for now.. 950 Pro 500GB). I set up the HDDs in RAIDZ1.. I figure one drive of redundancy is plenty, especially because I have a 2nd Synology NAS with 5x8TB HDDs in it.. though I do plan to turn it off and only fire it up when I need to do some backups from this unit. In the 10u DIY rack I will have a UDM Pro, 24port Pro switch (which has 2 10gig and 24 1gig ports), the UNVR for camera recording, and the 8 port 10gig unifi aggregate switch.

SO.. with that out of the way.. I am absolutely unsure how best to set this all up. From a video (or a couple) I watched, it looks like I run the SFP+ from the UDMP to the 10gig aggregate switch. I run 2 10gig lines from that to the 24port switch via SFP+ cables (I bought 3 SFP+ short cables to do all this). Then, I would run the CAT6A cable to the UNVR (back side of the unit). That is 4 of the 8 ports used. SO.. ALL of these are in the same cabinet as the server.. so running 1 or 2 SFP+ to CAT6A cables from the server to the 10gig aggregate switch is fine. I am good with just the one, as with 4 drives in RAIDZ1 I doubt I can saturate a 10gig connection. The only reason I was asking to run 2.. is since I have 4 ports available, and nothing else is going to use them.. why not.. right? At the very least, I get 2 10gig file transfers at the same time possibly.

So.. when you say "it's going to run hot" I assume you mean the SFP+ to CAT6A adapter gets very hot to the touch?

I could potentially run the SFP+ from the NAS to one of the 24port Pro's SFP+ ports?? I am not sure if that is doable.. and if so, if it makes the most sense. OR.. would it be a better use of the 2 SFP+ ports on that switch to connect them to the 10gig aggregate? 24 x 1gig.. with ALL 24 devices somehow actively moving files to/from the NAS.. is the only way I could ever imagine using the full 20gigs.. and if I understand you right, that wouldn't matter anyway.. if 24 devices tried to access the NAS at the same time, I assume there are still only the 2 10gig pipes.. they don't transfer multiple files at the same time.. right? So again the only benefit I would get from wiring up 2 ports from the NAS to the 10gig switch.. is in the off chance I copy two files at once from any of the devices connected to the 24 port switch.. yah?

In the cabinet I have 4 intake fans, 4 exhaust fans (AC Infinity), and a controller, and am putting a better fan on the CPU tomorrow. I am hoping that is enough cooling. Right now in my little crappy 5u case there is no cooling at all.

Anyway.. wanted to say thank you regardless. Really appreciate all your help.
Oh, you're using Ubiquiti switches? I don't know anything about them... You may need to browse their forum for help with setting them up.

(And yes, I meant that the transceiver gets hot to the touch.)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The Ubiquiti switches are basically "cloud" style managed switches that integrate into Ubiquiti's Unifi controller, and are generally unremarkable otherwise. If you are managing a Ubiquiti site, they might make sense because you can manage things from a single pane of glass. You can get versions that support PoE for Unifi AP's which is kinda nice.
 

beowu!f

Dabbler
Joined
Oct 3, 2021
Messages
22
Oh, you're using Ubiquiti switches? I don't know anything about them... You may need to browse their forum for help with setting them up.

(And yes, I meant that the transceiver gets hot to the touch.)
So.. is the transceiver getting hot a problem? I would assume many people use these things to connect SFP+ to network ports on switches? In my cabinet I have 4 AC Infinity intake fans at the bottom, and 4 exhaust at the top.. I am hoping once I close it off it will provide enough air flow to help keep things cool. For now the front area will be open.. just going to attach a screen "door" to try to keep most dust out until I have the time to build a door for the front. So the air flow may not matter too much right now. But.. that said.. you brought up the point about it being hot.. why did you say so? Is that a concern with regards to the heat it puts out.. or that it can get hot and melt/die/cause problems?

I REALLY like the idea of using one of the 2 SFP+ 10gig ports on the 24port switch to connect directly to the NAS. I would be fine with that route if that works. I will find out if the UDM Pro should go to the 24port switch or the 10gig aggregate switch.

Really hope this all works so that I see 10gig speeds! Though.. the bottleneck now is that my house is all 1gig.. so I may have to run a 10gig wire from wherever I end up putting this network rack to my office so my workstation can make use of those speeds.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So.. is the transceiver getting hot a problem? I would assume many people use these things to connect SFP+ to network ports on switches?

No. For the reasons outlined in my tangential posts about the evolution of 10G, it turns out that virtually NO one is using 10G copper at scale (the industry word for "high density") in the data center. It's a power hungry watt blasting proposition with no benefits, much higher latency, and general suckiness all around.

The SFP+ form factor is spec'd for 1.5W of power, and 10GBASE-T requires more than that. There have been some vendors building switches that can provide more power than that, but when you add another 1.5W of power to an already power-hungry 48 port 1U switch, that's just creating more room for failure.
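Rough math, assuming a typical ~3 W draw for a 10GBASE-T SFP+ module (a ballpark figure for illustration, not the spec of any particular part):

```python
# Power headroom math for 10GBASE-T modules in SFP+ cages.
SFP_PLUS_BUDGET_W = 1.5   # what the SFP+ form factor is spec'd to supply
COPPER_MODULE_W = 3.0     # assumed draw of a 10GBASE-T SFP+ transceiver
PORTS = 48                # a dense 1U switch

over_budget_per_port = COPPER_MODULE_W - SFP_PLUS_BUDGET_W
extra_heat_total = over_budget_per_port * PORTS

print(f"Each module exceeds the SFP+ budget by ~{over_budget_per_port:.1f} W")
print(f"Fully populated: ~{extra_heat_total:.0f} W of extra heat in one 1U box")
# -> ~1.5 W over spec per port, ~72 W more heat across 48 ports, which is
#    part of why dense 10G copper never took off in the data center.
```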
 

beowu!f

Dabbler
Joined
Oct 3, 2021
Messages
22
No. For the reasons outlined in my tangential posts about the evolution of 10G, it turns out that virtually NO one is using 10G copper at scale (the industry word for "high density") in the data center. It's a power hungry watt blasting proposition with no benefits, much higher latency, and general suckiness all around.

The SFP+ form factor is spec'd for 1.5W of power, and 10GBASE-T requires more than that. There have been some vendors building switches that can provide more power than that, but when you add another 1.5W of power to an already power-hungry 48 port 1U switch, that's just creating more room for failure.

Ok.. let me rephrase that. :D In my hobbyist/home use.. since I am not sure if I can use the SFP+ port on the switch directly to the NAS.. assuming I cannot.. is using that adapter going to be a problem.. given that the NAS itself won't be used much.. mostly for streaming some videos from time to time, sometimes copying files to or from it? Is there concern that the heat it generates can ruin the SFP+ port on the NIC (or the CAT port on the 10gig switch)? Will the heat it generates be so much that it can cause problems in the cabinet?

I hear you and agree with the use in data centers.. I would assume fiber would be better anyway, from what I know of it. Anytime we can cut down on waste heat.. and from what I understood you guys to say, it's also a cheaper/better deal to go SFP+ anyway.. all the better.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We're talking a watt or two of extra power (and therefore heat). One of the issues, though, is whether or not a random SFP+ port can sink that load, given the 1.5W design spec. Unknown. The cynic in me says "don't try."
 