This document is now available in the Resources section! Use the tabs above to navigate to the document itself.
Also, if you'd like to repost this elsewhere, I'd appreciate it if you would have the courtesy to ask. And to credit the source.
The current version also follows inside the spoiler tag:
This is a discussion of 10 Gig Networking for newcomers, with specific emphasis on practical small- to medium-scale deployments for home labs or small office users. It originated with a forum thread located here that has many pages of additional information and discussion.
History
In the early 1990s, we had 10Mbps ethernet. In 1996, 100Mbps ethernet. In 1999, gigabit ethernet. In 2002, 10 gigE. About every three or four years, an order of magnitude increase in network speeds was introduced. In all cases but the last, reasonably priced commodity gear became available within about 5 years of introduction. We stalled out with 10G because the technology became more difficult. Copper-based 10G wasn't practical at first. Further, and perhaps unexpectedly, it seemed that gigabit was finally sufficient for many or even most needs.
LACP for gigabit
A lot of people have worked around the lack of 10gigE with Link Aggregation, that is, using multiple gigabit connections from a server to a managed switch. Unfortunately, the load-balancing schemes involved hash each flow onto a single member link (typically by MAC or IP address), so there is no standard way to spread traffic between one client and one server across multiple links; a lone workstation talking to a fileserver is still limited to a single gigabit, as the toy example below illustrates. LACP kind of sucks for NAS.
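To make that concrete, here's a toy sketch of the idea (this is not any particular switch's algorithm, and the addresses are made up; real gear hashes some vendor-specific mix of MAC/IP/port fields, but the shape is the same):

[CODE]
# Toy illustration of LACP-style "per flow" load balancing.
# Each frame's source/destination pair is hashed to pick ONE member link,
# so one workstation talking to one NAS never exceeds a single link's speed,
# no matter how many gigabit links are in the bundle.

def pick_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Map a source/destination pair onto one member link of the bundle."""
    return hash((src_mac, dst_mac)) % n_links

NAS_MAC = "00:25:90:aa:bb:cc"   # made-up NAS address
N_LINKS = 4                     # four gigabit links in the LACP bundle

# One workstation: every frame hashes to the same link -> ~1Gbps ceiling.
print(pick_link("3c:97:0e:11:22:33", NAS_MAC, N_LINKS))
print(pick_link("3c:97:0e:11:22:33", NAS_MAC, N_LINKS))   # same link again

# Many clients: different flows spread across the links, which is the case
# where LACP actually does help.
for client in ("aa:00:00:00:00:01", "aa:00:00:00:00:02", "aa:00:00:00:00:03"):
    print(client, "->", pick_link(client, NAS_MAC, N_LINKS))
[/CODE]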
10 gigabit Ethernet technologies
There's a bunch, but let's stick to the practical stuff you might actually want to use, and leave out stuff like XFP or CX4. The modern stuff you want to use boils down to just two: SFP+ and 10GBASE-T.
SFP+ is a modular transceiver format: you plug one of several different types of module into a cage on a switch or network interface card. An evolution of the older SFP technology, it is usually backwards-compatible with SFP. An SFP+ module is essentially a transceiver that talks to the host and requires a "device driver" of sorts, so you need a module that is compatible with the switch or ethernet adapter (some vendors also engage in vendor lock-in, requiring you to use their own branded modules). This kind of sucks, but once you get over that hurdle, you have lots of options and it is incredibly flexible. SFP+ is available in various flavors. The ones you're likely to use:
SR ("short range") optics, "optics" being the usual word for optical SFP+ modules, can run up to 300 meters when used with OM3 (aqua colored) or OM4 fiber; that covers just about any in-building run.
LR optics are longer range, and when used with the proper singlemode fiber, can run up to 10 kilometers.
These are laser based products and you should not look into the optics or ends of the fiber. ;-)
Also available are direct-attach SFP+'s, where two SFP+ modules have been permanently connected together via twinax cable. These are essentially patch cables for SFP+. The downside is that sometimes you run into compatibility issues, especially when you have two SFP+ endpoints from different manufacturers who both engage in vendor lock-in. The upside is that sometimes they're cheaper and (maybe?) more durable than optics and fiber, where you need to not be totally stupid and careless with kinking the fiber.
Originally, no SFP+ modules were available for 10GBASE-T. The wattage requirements for 10GBASE-T exceed what's available to an SFP+ port under the specification. There are 1000BASE-T SFP modules for gigabit, however. More recently, some manufacturers have created low-power 10GBASE-T SFP+ modules with limited link distance; because the power required increases with distance, a short-reach module can squeak under the power budget, so this may be an option. Unfortunately, these still seem very expensive.
On both the cost and performance fronts, SFP+ tends to win: it has somewhat lower latency and lower power consumption than 10GBASE-T.
One of the biggest caveats here, though, is that once you go down the SFP+ path, you probably want to stick with it. There's no easy switching away from it except to do a forklift upgrade. (But don't feel bad, SFP+ marks you as a diehard networker.)
The Intel X520 card is an example of an SFP+ card, which is available in one and two port configurations, and -DA (direct attach) and -SR (short range) optic variants. The difference between -DA and -SR is simply that the -SR will include Intel SR optics.
10GBASE-T is the copper 10G Ethernet standard. Much more familiar to most end users, it uses RJ45 modular connectors on Category 6 or better cable; Category 6 will typically reach up to around 50 meters. This was basically a worthless standard up until recently, when several manufacturers started creating less-expensive switches that support 10GBASE-T. Probably the most notable of these is the Netgear ProSafe XS708E, available for $800-$900.
I believe that 10GBASE-T will ultimately be the prevailing technology, but it is very much the VHS (ref VHS-vs-Betamax) of the networking world. It is an inferior technology that burns more power and makes more compromises. Most of the deployed 10G networking out there today is still NOT copper, so for the next several years, at least, the best deals on eBay are likely to be for SFP+ or other non-copper technologies.
What Do You Need?
While it is tempting to think of your entire network as 10 gigabit, in most cases this is at least a several thousand dollar exercise to make happen, factoring in the cost of a switch, ethernet cards, and wiring.
There are some alternatives. One easy target: if gigabit is acceptable for your endpoints (PCs and other clients), it is not that hard to find a gigabit switch with several 10G uplinks. The cheapest decent one I've seen is probably the Dell Networking 5500 series, such as the 5524, often available for around $120 (2018) on eBay. That model comes with two 10G SFP+ slots, which could be used for a FreeNAS box and a workstation at 10G, while also allowing all remaining stations to share in the 10G goodness. Now that it's 2016 we're also seeing the Dell Networking N2024, which is an entry-level Force10-based switch. If you don't mind eBay for all purchases, you can get a basic 10G setup for your NAS and one workstation for less than $500. These are both fully-managed, full-feature "enterprise" switches.
We recently debated another alternative, which is to abuse the FreeNAS box itself as a bridge using FreeBSD's excellent bridging facility. This is very cost-effective but has some caveats ... primarily that you need to be more aware that you've got a slightly hacked-up configuration. Since modern ethernet technologies are fully capable of point-to-point operation without a switch, clients can be hooked up directly to the server (via an ordinary 10GBASE-T cable, no crossover needed, or an SFP+ direct-attach cable or optics). The simple case of a single workstation hooked up to the server via a direct cable is fairly easy. Multiple workstations might involve bridging. If you wish your clients to receive Internet connectivity, that's more complicated as well.
In 2013, Netgear introduced a few new 10GBASE-T switch options, including the ProSafe XS708E, which offers 8 ports at a cost of around $100 per port.
The Dell PowerConnect 8024F is often available on eBay for around $400, offering a mix of SFP+ ports along with four 10GBASE-T ports. This is probably the cheapest option to get 10gigE for a NAS or two, some ESXi boxes, and then a few runs of 10GBASE-T for workstations.
A variety of new entrants exist from Ubiquiti and others, and now that it's 2018/2019 there are some more affordable options. Notably, MikroTik now has the CRS305-1G-4S+IN, a 5-port switch with four SFP+ ports and one gigabit copper port, which is very inexpensive (less than $150 new) and looks like a real contender for smaller home networks. MikroTik also offers the CRS309-1G-8S+IN, a 9-port switch with eight SFP+ ports and one gigabit copper port, for $270. Both of these products are reported to perform poorly IF you use their advanced routing features, but are reported to do wirespeed layer 2 switching just fine. I have not used either of these personally, but they're pretty compelling.
What Card Do I Pick?
This forum has been very pro-Intel for gigabit cards, because historically they've "just worked." However, for 10gigE, there have been some driver issues in versions of FreeNAS prior to 9.3 that led to intermittent operation. Additionally, the Intel adapters tend to be rather more expensive than some of the other options. 10gigE is not in high demand, so some of the niche contenders often have products that may, counterintuitively, be very inexpensive on the used market. These cards may be just as good a choice as the Intel offerings, if not better. We run Intel X520s here, but the following notes are gathered from forum users.
@depasseg and I note: Intel X520 and X540 are supported via the ixgbe driver. Intel periodically suffers from knockoff cards in the new and used markets. There should be a Yottamark sticker on the card that'll help authenticate it as genuine; check the country, datecode, and MAC address Yottamark gives you, and don't just blindly trust it. Not a good choice if you wish to run versions prior to 9.3: https://bugs.freenas.org/issues/4560#change-23492 Also note that there have been a variety of problem reports with the X540 and TSO.
@Mlovelace, @depasseg, and @c32767a note: Chelsio is iXsystems' card of choice. @Norleif notes that the S320E-SR-XFP can sometimes be found for less than $100 on eBay. The Chelsio T3, T4 and T5 ASICs are fully supported by the current version of FreeNAS and are the cards shipped for 10gigE if you buy a TrueNAS system. iXsystems: "FreeNAS 9.2.1.5 supports the Chelsio T3 with the cxgb driver. It supports the T4/T5 with the cxgbe driver. We keep these drivers current with the latest code from the vendor. By far and away the Chelsio cards are the best FreeBSD option for 10Gbe." Also note that the S310E only supports PCIe 1, so speeds may be limited especially in an x4 slot. @Mlovelace also has found a great vendor for generic Chelsio SFP+ optics.
@depasseg and @c32767a note: SolarFlare: Some users recommend the SFN5162F. @jgreco notes he just got four SolarFlare SFN6122F cards on eBay for $28 each, with SR optics (3/2019). This is awesome for ESXi, as the SolarFlare cards burn half the watts of the Intel X520s.
@Norleif reports: IBM NetXtreme II 10Gbit (Broadcom BCM57710): works in FreeNAS 9.3 and can sometimes be found for less than $100 on eBay.
@Borja Marcos notes: Beware the Emulex "oce" cards - serious issues with them, panics when moving some traffic. There is a patch (see relevant discussions on the freebsd-net mailing list) but the stock driver crashes badly.
Notes on Fiber
A home user won't need anything other than short range ("multimode") fiber, which runs anywhere within a few hundred feet if done right. The ins and outs of other types of fiber are too complex for this forum.
Fiber is graded similarly to category cable, where you have Cat3 (10Mbps), Cat5 (100Mbps), Cat6 (1Gbps), etc. In fiberspeak these grades are called "OM". OM1 and OM2 are older standards, typically clad with an orange jacket. These are actually probably fine for short runs of 10Gbps, up to a few dozen feet, depending on whose specs you believe. However, 10Gbps uses laser light where the older standards used LEDs, so to maintain signal integrity, aqua-colored OM3 was introduced, followed by OM4 to get to 100Gbps speeds.
Fiber is sensitive to bending, as the transmission of light is dependent upon optical tricks to keep the signal integrity. Do not make sharp bends in fiber.
Some newer fiber is referred to as "bend insensitive fiber" (BIF), but the word "insensitive" really means "less sensitive". At the same time, some BIF is being put into a single jacket, which renders it incredibly flexible and easy to work with. Check out this image for a comparison of 1Gb copper, 10Gb OM3, and 100Gb BIF OM4.
The BIF OM4 in that image was sourced from fs.com, OM4-ULC-DX, and can be ordered in custom lengths at a very good price. This is not an endorsement, I get no kickbacks, etc. I just found it to be an amazing product.
But I Want 10G Copper
Understandably so. Or, rather, you THINK you want it. (If you don't, skip this section!) The 1G copper you're used to has a lot of upsides, including the ability to field terminate. However, copper is very tricky to work with at 10G speeds. We got from 10Mbps to 100Mbps by moving from Cat3 to Cat5 wire, which increased the bandwidth that could be handled by improving the RF characteristics of the wire. The change from 100Mbps to 1Gbps was accomplished by some more modest cabling improvements, plus using all four pairs bidirectionally with echo cancellation and 4D-PAM5 modulation, which is really pushing the boundaries of what's reasonable on copper.
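To put rough numbers on the gigabit case (these are the 1000BASE-T figures as I recall them, so treat them as approximate): each of the four pairs signals at 125 megabaud, and 4D-PAM5 delivers about 2 data bits per symbol per pair, so 4 pairs x 125M symbols/sec x 2 bits = 1000Mbps.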
To get to 10G on copper is a lot harder. There has to be a tenfold increase in capacity. We already burned through the additional two pairs, AND went to bidirectional, at the 100-1G transition. In order to get another 10X multiplier, there are basically only three things we can do: one, slightly better cabling. This is an incremental improvement, unlike the jump from Cat3->Cat5. Two, better modulation and encoding. Three, use more power, a side effect of better modulation and encoding. There's a nice slideshow that goes into the details if you're interested. This means that cabling becomes ever more fiddly and hard to work with.
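Putting similarly rough numbers on 10GBASE-T (again from memory of the spec, so approximate): the symbol rate jumps to 800 megabaud per pair, and the denser PAM-16/DSQ128 constellation carries about 3.125 data bits per symbol per pair after coding, so 4 pairs x 800M symbols/sec x 3.125 bits = 10,000Mbps. Running a 6.4x higher symbol rate over the same four pairs is exactly why the cabling, signal processing, and power requirements get so much nastier.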
But here's the way this works out for you, the FreeNAS user who doesn't have a 10G network, and wants one.
There's a depressingly small amount of 10GBASE-T stuff on the market. If you buy it, it'll probably have to be new. It'll be expensive. It'll be power-hungry. This stuff only became vaguely affordable around 2013-2014, and hasn't sold well. It doesn't work over existing copper, unless your existing copper plant was already wired to an excessive standard like CAT6A. It has done so badly in the marketplace that manufacturers came out with fractional upgrades, 2.5GBASE-T and 5GBASE-T, that are eating away at some of the markets that might have driven 10GBASE-T. If you try to run 10GBASE-T, you'll probably need new cabling. There are a small number of 10G copper network cards out there, most of which you'll need to buy new, because no one used these in the data center.
By way of comparison, you can go out and get yourself an inexpensive 10G SFP+ card with SR optics for about $100, and a Dell 5524 switch for about $100 as well. This works swimmingly well, without drama. This stuff has been in the data center for more than 15 years, and people nearly give away their "old" stuff.
There's also been some excitement about SFP+ 10GBASE-T modules. Don't get excited. The realities of SFP+ mean that these modules can never work well. Most devices will not have the correct "drivers" to drive a copper SFP+ module. Those that do are likely to find themselves limited by cable length, as the SFP+ form factor only provides 2.5W *MAX* per SFP+, which is far below the 4-8 watts it may take to drive 10GBASE-T at full spec. Even if you only need to go 1 meter, the copper SFP+'s are expensive with relatively low compatibility. So in general these are nowhere near as useful or usable as we'd like.
As usual, this post isn't necessarily "complete" and I reserve the right to amend it and/or delete, integrate, and mutilate the reply thread as I see fit in order to make this as useful as possible.