Intel Xeon D-1540 - network speed limits for 10 Giga-bit ports

Status
Not open for further replies.

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Looking at the specifications for the Xeon D-1540 (really the whole D-15xx series), it
appears this is the network port configuration:
  • Integrated Platform Controller Hub - dual ports of 10/100/1000 Mega-bits per second
  • Product family LAN controller - dual ports of 1/10 Giga-bits per second
The 2 x 1/10 Giga-bit ports don't support 10Mega-bits or 100Mega-bits per second speeds.

This seems to be why SuperMicro either supplies a board with an additional dual-port
10/100/1000 Mega-bit per second chip AND the dual 1/10 Giga-bit per second ports, or just
the dual 10/100/1000 Mega-bit per second ports and no 10 Giga-bit networking.

Here are some references, IPCH first (page 55), then the 1/10 Giga-bit next (page 21):

http://www.intel.com/content/www/us/en/processors/xeon/xeon-d-1500-datasheet-vol-1.html
http://www.intel.com/content/www/us/en/processors/xeon/xeon-d-1500-datasheet-vol-4.html

For me at home, I could live with just the supplied ports and no additional dual-port network
chip. As long as one port supported 100 Mega-bit per second speed, I'd select that port as the
management port. It would still probably run at Giga-bit speeds, but a bad cable that dropped
one of the pairs might still work at 10/100 Mega-bit speeds until it could be replaced.
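To make that concrete, here's a toy sketch of the port-selection reasoning (the port names and speed lists are made up for illustration, not pulled from the datasheet):

# Toy sketch of the "pick a management port" idea above.
# Port names and supported speeds are illustrative, not from the datasheet.
PORTS = {
    "pch0":  [10, 100, 1000],    # PCH GbE port
    "pch1":  [10, 100, 1000],    # PCH GbE port
    "tenG0": [1000, 10000],      # integrated 1/10 Gbps port
    "tenG1": [1000, 10000],      # integrated 1/10 Gbps port
}

def pick_management_port(ports):
    """Prefer a port that can fall back to 10/100 Mbps, so a cable that
    drops a pair still leaves the management interface reachable."""
    for name, speeds_mbps in ports.items():
        if 100 in speeds_mbps:
            return name
    return None

print(pick_management_port(PORTS))   # -> "pch0"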
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The 2 x 1/10 Giga-bit ports don't support 10Mega-bits or 100Mega-bits per second speeds.
That's typical of 10GbE NICs, from what I understand. Many 10Gb copper switches are similar.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's typical of 10GbE NICs, from what I understand. Many 10Gb copper switches are similar.

As are many 10G SFP+ switches. I can stick 1G SFPs in, but nothing lower. It's wonderful to have shiny 10G switchgear and then have a janky old 1G last-decade switch in the rack too.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Well, the Sun / Oracle hardware that has 10GBase-T on the system board
has a chipset that supports 100Mbps/1Gbps/10Gbps.

For SFP+, it makes sense that they could support SFPs (for 1Gbps).
Using fiber for 100Mbps is not that common today, and the old transceivers
were much larger, generally using SC connectors.

In some ways, I wish the standards supported the 4 pairs as individual sets.
For example:

1 pair - 250Mbps, or 2.5Gbps
2 pairs - 500Mbps, or 5Gbps
3 pairs - 750Mbps, or 7.5Gbps
4 pairs - 1Gbps, or 10Gbps

Thus, the loss of 1 pair would only lose 25% of the capacity. The channel would
simply re-negotiate to the reduced bandwidth, and monitoring software
would alarm on the partial path failure. (I say path because it could be the
cable, or the socket at either end, or the transceiver chip, etc...)
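Just to illustrate what I mean, here's a toy sketch of that scheme in code (entirely hypothetical, not how any real PHY negotiates):

# Toy model of the hypothetical per-pair scheme above (not a real IEEE
# standard): each pair carries 250 Mbps (GbE class) or 2.5 Gbps (10GbE
# class), and the link rate simply scales with the number of good pairs.

def negotiated_mbps(working_pairs, per_pair_mbps=2500):
    """Link rate for 0-4 working pairs."""
    assert 0 <= working_pairs <= 4
    return working_pairs * per_pair_mbps

full = negotiated_mbps(4)       # 10000 Mbps with all four pairs
degraded = negotiated_mbps(3)   # 7500 Mbps after losing one pair

if degraded < full:
    # Monitoring alarms on the partial path failure instead of waiting
    # for a total link loss.
    print(f"ALARM: link degraded to {degraded} Mbps (lost {full - degraded} Mbps)")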
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, the Sun / Oracle hardware that has 10GBase-T on the system board
has a chipset that supports 100Mbps/1Gbps/10Gbps.

See, that kinda makes sense on a system board. There I could even see dropping the 100Mbps; who the heck has a 100Mbps network anymore? :smile: The part that makes it a real annoyance is when they limit the switches.

What I usually need is something like a bunch of 10G ports and then a few ports for management gear like RPDUs (10Mbps or 10/100Mbps), iKVM (10/100), etc. I probably don't mind burning a few pricey 10G ports if it means I can get by without another switch to worry about.

For SFP+, it makes sense that they could support SFPs (for 1Gbps).
Using fiber for 100Mbps is not that common today,

ONLY because 1GbE fiber is cheap and ubiquitous. The whole reason SFP came about was that it was frustrating to have a chassis switch like the Cisco 65xx with GBICs. You'd get about 16 GBICs per card, but you could do 48 SFPs per card. At the time the 6500 series was introduced (~2000?), it was very common to have 100Mbps networks with 100M uplinks back to an agg switch like the Cat.

and the old transceivers
were much larger, generally using SC connectors.

Yes, and moving away from that towards SFP generally increased port density by ~3x.

In some ways, I wish the standards supported the 4 pairs as individual sets.
For example:

1 pair - 250Mbps, or 2.5Gbps
2 pairs - 500Mbps, or 5Gbps
3 pairs - 750Mbps, or 7.5Gbps
4 pairs - 1Gbps, or 10Gbps

Thus, the loss of 1 pair would only lose 25% of the capacity. The channel would
simply re-negotiate to the reduced bandwidth, and monitoring software
would alarm on the partial path failure. (I say path because it could be the
cable, or the socket at either end, or the transceiver chip, etc...)

Nice, but can you imagine the interop issues and complexity that this would generate?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
In some ways, I wish the standards supported the 4 pairs as individual sets.
For example:

1 pair - 250Mbps, or 2.5Gbps
2 pairs - 500Mbps, or 5Gbps
3 pairs - 750Mbps, or 7.5Gbps
4 pairs - 1Gbps, or 10Gbps

Thus, the loss of 1 pair would only lose 25% of the capacity. The channel would
simply re-negotiate to the reduced bandwidth, and monitoring software
would alarm on the partial path failure. (I say path because it could be the
cable, or the socket at either end, or the transceiver chip, etc...)
...
Nice, but can you imagine the interop issues and complexity that this would generate?
Actually yes, I can imagine the nightmare of compatibility, monitoring and the potential for
asymmetrical bandwidth.

For example, what if pairs 1 & 2 work fine both ways, but pairs 3 & 4 only work in one
direction? We'd get 500Mbps or 5Gbps one way, and 1Gbps or 10Gbps the other way. The
monitoring software on the server would say all was good at 1Gbps receive, while a SysAdmin
called out in the middle of the night could not figure out why outbound traffic (like a
backup) was bottlenecked...

So network drivers would have to announce something like:

eth0 link failure
eth0 link up, re-negotiating
eth0 negotiated 1Gbps receive, 500Mbps transmit
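Mocking that up with the same toy model from my earlier post (the message format is made up, not real driver output):

# Extends the hypothetical per-pair model: receive and transmit negotiate
# separately, since a pair might only work in one direction.
PER_PAIR_MBPS = 2500   # 10GbE-class pairs in this made-up scheme

def negotiate(rx_pairs_ok, tx_pairs_ok):
    rx = rx_pairs_ok * PER_PAIR_MBPS
    tx = tx_pairs_ok * PER_PAIR_MBPS
    print("eth0 link up, re-negotiating")
    print(f"eth0 negotiated {rx} Mbps receive, {tx} Mbps transmit")
    if rx != tx:
        print("eth0 warning: asymmetric link, check pairs / jack / transceiver")

# Pairs 1 & 2 work both ways, pairs 3 & 4 only work toward us:
negotiate(rx_pairs_ok=4, tx_pairs_ok=2)   # 10000 Mbps receive, 5000 Mbps transmit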

The reason I suggest this is that some environments have 4 VLANs for normal servers:
  1. Backup (and potentially network boot for loading and recovery)
  2. OS management (DNS, NTP, outgoing E-Mail, SSH, etc...)
  3. Ingress data
  4. Egress data
The last 2 may use LACP (or Solaris IPMP) for reliability and performance. Lose 1 port,
no real problem. So a total of 6 ports in use (rough sketch below).
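Roughly this kind of layout (interface names and VLAN IDs are invented; "bonded" stands in for LACP or IPMP):

# Rough sketch of the 4-VLAN server layout described above.
# Interface names and VLAN IDs are invented for illustration.
LAYOUT = {
    "backup":        {"vlan": 10, "ports": ["eth0"],         "bonded": False},
    "os-management": {"vlan": 20, "ports": ["eth1"],         "bonded": False},
    "ingress-data":  {"vlan": 30, "ports": ["eth2", "eth3"], "bonded": True},
    "egress-data":   {"vlan": 40, "ports": ["eth4", "eth5"], "bonded": True},
}

total_ports = sum(len(net["ports"]) for net in LAYOUT.values())
print(total_ports)   # -> 6 ports in use, as counted above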

But the first 2 using a single port each is generally fine, UNTIL you use a chassis server.
All the main vendors, HP, Sun/Oracle, IBM, Cisco, have them. Then you HAVE to use LACP
or IPMP on all ports. Otherwise you need an outage window to replace the built-in Ethernet
switch/pass-through devices.

Normally, re-homing a network cable on a switch for a single server connection is not a
problem. Until you have to replace the entire switch, as in the case of these server blade
chassis. Then all blades are affected. And in some cases you can't wait, because if a blade
is down (without backups, you may not want production data updates), you have to
change the chassis network module ASAP. Possibly affecting all blades' backups and OS
management during the changeout.

Sorry for rambling on, but I have had lots of stuff bite me hard due to network issues
(and yes, I have the teeth marks to prove it :smile:).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For example, what if pairs 1 & 2 work fine both ways, but pairs 3 & 4 only work in one
direction?

Magic copper! :smile:

Normally, re-homing a network cable on a switch for a single server connection is not a
problem. Until you have to replace the entire switch, as in the case of these server blade
chassis. Then all blades are affected. And in some cases you can't wait, because if a blade
is down (without backups, you may not want production data updates), you have to
change the chassis network module ASAP. Possibly affecting all blades' backups and OS
management during the changeout.

Sorry for rambling on, but I have had lots of stuff bite me hard due to network issues
(and yes, I have the teeth marks to prove it :)).

Yeah, I've been building redundant networks since the '90s and have totally avoided stuff like blades. Too much stuff wrapped up in a failure-prone package. Much prefer individual servers, and everything tends to get done in twos or threes.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Magic copper! :)
...
I was waiting for someone to say that :).

It could be the Ethernet transceiver chip with a bad transmitter or
receiver. For Gigabit and 10Gbps, they use each pair in both directions.
I've never seen that type of failure, but if it affected pair 1 or 2 (used
by 10/100Mbps), then we would have total link failure.

There HAVE been cases I've seen where a port (mostly switch ports)
won't work properly at Gigabit speeds, but will work at 10/100Mbps
speeds. The cable does test good. So perhaps that is an example of
partial transceiver failure.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's often a bad jack. As you note, 10/100 only use 1/2/3/6, but gig uses all eight. A bad connection on 4/5/7/8 will often look like some strange negotiation issue and everything works fine if you lock in at 10 or 100.
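As a quick sketch of that diagnosis logic (just illustrative; the pin sets are the standard 10/100 vs. gigabit pin usage mentioned above):

# Which pins each speed actually needs: 10/100 only use pins 1/2/3/6,
# while 1000BASE-T needs all eight.
PINS_NEEDED = {
    10:   {1, 2, 3, 6},
    100:  {1, 2, 3, 6},
    1000: {1, 2, 3, 6, 4, 5, 7, 8},
}

def speeds_that_still_work(bad_pins):
    """Speeds a link can still run when some pins have a bad connection,
    e.g. a bent pin in the jack."""
    return [s for s, pins in PINS_NEEDED.items() if not (pins & set(bad_pins))]

print(speeds_that_still_work({4, 5}))   # -> [10, 100]  gig looks flaky, 10/100 locks in fine
print(speeds_that_still_work({1, 2}))   # -> []         total link failure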
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
That's often a bad jack. As you note, 10/100 only use 1/2/3/6, but gig uses all eight. A bad connection on 4/5/7/8 will often look like some strange negotiation issue and everything works fine if you lock in at 10 or 100.
Yes, I have actually seen a bent pin in an 8P8C jack on a Sun SPARCstation 5. Of course, to
affect an SS5 it would have to be pair 1 or 2 (its system board Ethernet port was 10/100Mbps
only).

Fortunately, it was both easy to fix temporarily and under contract for the more permanent fix
of a motherboard replacement. When I left it working with the user, I told him not to unplug the
cable until the field engineer came by with the replacement.
 