Intel 350-T only running at 100bt (no gigabit)

JBK

Dabbler
Joined
Oct 30, 2021
Messages
46
Hello! First post here so I will try and follow the guidelines as best as I can ...

SYSTEM:
* Dell R740xd, 64 GB RAM, single Xeon Silver processor
* Intel 350-T 4-port gigabit NIC (built-in/on-board)
* TrueNAS 12.0-U6
* Cisco 2960L gigabit switch (IOS 15.1)
* CAT-6 network (short run)
* TrueNAS shows 4 igb interfaces

TSHOOT:
* All cables changed (system/patch/switch) twice
* Switch cleaned and tested (different ports), speed and duplex set to auto
* Tried all 4 ports on the card (and switch) and all come up at 100BT
* iDRAC is on the same switch and running 1gig (dedicated NIC)

No matter what I try, the system will not run at 1000BT; it insists on 100BT. I scoured the forums, found a couple of similar issues, and tried all the recommendations. I noticed that FreeBSD suggests the em driver for the 350-T model NIC, but before I go storming into kernel mods and driver changes I wanted to get a consensus from the public. This is causing lag in our production environment.

Funny thing is, I have two (basically) identical systems and one is working fine. The only difference is that the original one was installed with 12.0-U1 and upgraded. The new one was installed with 12.0-U5 and then upgraded, and that is the one that won't see the 350-T as gigabit. It did not run gigabit on U5 either; I upgraded to try to fix it (and to be current).

Any hints would be great if you have seen this before.

Thank you!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,398
Have you swapped cables? Usually a cable that won't support 1G only has continuity on 4 of the 8 wires.
 

JBK

Dabbler
Joined
Oct 30, 2021
Messages
46
Yes. Based on previous issues I found in forum posts, I have replaced all cables twice, all the way through patch/switch/system.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Inspect the sockets for bent pins. It is extremely common for someone to accidentally bend a pin, sometimes because a crap-grade cable that wasn't properly crimped got jammed into a port.
 

JBK

Dabbler
Joined
Oct 30, 2021
Messages
46
I will check, but I do not think this could be the issue. We are gov't funded and have to meet certain standards, so the director mandated that we buy all equipment and parts only from reputable suppliers. We do not make our own cables; I have not made my own in 25 years. We do have Fluke testers, and we did test the cables with them. These are all certified CAT-6. With all this talk about cables, I am thinking of dropping to CAT-5e just to test.

Add to that the fact that we have replaced the cables (all the way through) three times on all four ports (switch/patch/system), changing switch ports every time, and the fact that the iDRAC is running 1000BT (we changed those cables at the same time for consistency). Like I said in my original post, we made sure to dig into everything obvious so that we would not be embarrassed by a problem that was just floating on the surface. :frown:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I will check, but I do not think this could be the issue. We are gov't funded and have to meet certain standards, so the director mandated that we buy all equipment and parts only from reputable suppliers. We do not make our own cables; I have not made my own in 25 years. We do have Fluke testers, and we did test the cables with them. These are all certified CAT-6. With all this talk about cables, I am thinking of dropping to CAT-5e just to test.

Most network cables are made in Asia, and BECAUSE you didn't make your own, the quality control is an unknown. It really only takes one pin that wasn't crimped properly to cause permanent damage to an ethernet jack. Various direct stresses, such as someone tripping over a cable, also make this sort of damage very common, especially on laptops. Sometimes port damage is very obvious:

[Photo: an Ethernet port with visibly damaged pins]


I've seen cables from various "reputable" manufacturers come in with defects, and even if the cables you are plugging in TODAY are fine, damage previously done to a port can make the connection fail.

What @Samuel Tai said is correct but perhaps not expansive enough.

If you remember the switch from 10Mbps to 100Mbps ethernet, you probably remember the cable needing to go from Cat3 to Cat5, with the much tighter twists. That's because the biggest component of that change was a frequency increase on the twisted pair. If we consider TIA-568B, classic 10/100Mbps ethernet is carried on the orange and green pairs: pins 1, 2, 3, and 6. Failure of any of these pins will result in NO ETHERNET at all. Pins 4, 5, 7, and 8 (the blue and brown pairs) are unused by 10/100Mbps.

However, we were very close to the practical limits of twisted pair cable with 100Mbps, and it wasn't possible to simply multiply the speed by 10x again to get to gigabit. So, instead, gigabit added the other two pairs, and also made all four pairs bidirectional, which effectively let the same common 100Mbps-capable Category cable go 4x faster without bumping the frequency. The line encoding changed as well. That's how we got from 100Mbps to 1Gbps without as big a change in cabling as when the world went from 10 to 100Mbps.

But now back to what @Samuel Tai said. To be completely accurate, you need good connectivity on pins 4, 5, 7, and 8. The failure of any one of these pins makes gigabit negotiation impossible, but as long as all of 1, 2, 3, and 6 are still good, the connection will pass as a 100Mbps connection.

Now, here's where it gets fun. It used to be that interoperability between 100Mbps devices wasn't a given. 1G has brought us into a golden era of 99%+ autonegotiation success, eliminating the need for crossover cables or fixed speed/duplex configurations (thank you, lord!). But sometimes something still goes wrong, and you're reporting that something has gone wrong, so it is worth checking all the likely culprits. Fortunately, you have a tester you might not have even known about.

Take a short Cat6 cable. On the FreeNAS side, do "ifconfig igbN up" for each of igb0, 1, 2, and 3. This will cause all four ports to attempt to link. Now here's the trick: connect igb0 to igb1 with the Cat6 cable, and then run "ifconfig igb0" to see what speed it links at. Yes, hook the server up TO ITSELF, and then inspect the link results. Iterate through port combos 0-1, 0-2, 0-3, and maybe a few other random ones. If you see ANY that fail to negotiate at 1000/full, inspect super-carefully for port damage. ANY failure to negotiate in this scenario indicates hardware damage, though it won't be clear whether the fault is the cable or one of the ethernet ports. Note that I am more paranoid and will WIGGLE the connectors while latched; this should not result in any change of state. I have absolutely seen this be a problem on maybe a dozen ports out of a thousand, so rare, but not that rare.
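
In shell terms, that loop-back test looks roughly like this (a sketch only; it assumes the four ports really do show up as igb0 through igb3 as you reported, so adjust names to match your system):

ifconfig igb0 up
ifconfig igb1 up
ifconfig igb2 up
ifconfig igb3 up
# cable igb0 directly to igb1, give it a few seconds, then check both ends
ifconfig igb0 | grep -E "media|status"
ifconfig igb1 | grep -E "media|status"
# a healthy pair reports: media: Ethernet autoselect (1000baseT <full-duplex>), status: active

Repeat with the cable moved between the other port combinations; anything that only reports 100baseTX (or no carrier) in this back-to-back setup points at damaged hardware.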

Next, test the longer cables you are using to hook up to the switch, in the same way.

If this all works out, then we have to look at more obscure things. If these are ports built into a Dell system board, for example, make sure that PoE and Energy Efficient Ethernet are disabled on the switch side and that autonegotiation is enabled. On the Dell side, make sure that you don't have any wake-on-LAN configuration in the BIOS, that the iDRAC is actually configured to use only the dedicated port, and that any PXE configuration is not trying to set up ethernet port options or anything silly. I know it sounds unlikely, but sometimes really obscure vendor "features" cause problems like this.
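
One more cheap check from the TrueNAS shell before digging into the switch or BIOS (interface name assumed, same caveat as above):

ifconfig -m igb0
# the "supported media" list should include 1000baseT; if it does not,
# the driver/NIC side is suspect rather than the cabling or the switch.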
 

JBK

Dabbler
Joined
Oct 30, 2021
Messages
46
After all the work and testing and troubleshooting I find out that the system is running through a patch panel that is >25 years old. We found a port that allows us to run gig and now we are having to test and possibly replace the entire panel. Then you all for your persistent help.
 

JBK

Dabbler
Joined
Oct 30, 2021
Messages
46
That should say THANK YOU ALL for your persistent help.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
After all the work and testing and troubleshooting I find out that the system is running through a patch panel that is >25 years old.

I admit I didn't see *that* coming, but that's because you said you had been replacing all the cables and I took your mention of "patch" to mean "patch cable".

Nevertheless, @Samuel Tai and I both jumped immediately onto the cabling as a fault, and that's mostly because the Cisco and Intel parts are some of the best stuff you can have, and problems with genuine Cisco/Intel combos are basically unheard-of. Ah, "properly wired" combos, that is. ;-)

We found a port that allows us to run gig and now we are having to test and possibly replace the entire panel. Then you all for your persistent help.

At more than 25 years old, that places you in an era where Cat3 cabling was still commonplace, and if it was Cat5, it was probably early 100MHz cable rather than the 350MHz pre-Cat5e that can sustain GbE over distance. It is entirely possible that all eight conductors are connected but far enough out of tolerance to fail 1G negotiation. 350MHz Cat5e didn't really come into common usage until maybe 2000, when we all saw gigabit coming down the road. So I think you're definitely looking at wholesale replacement unless this plant is never going to see any further "new servers" etc. Even if it seems to be working on "this one port" you found, expect that it is probably right at the bleeding edge of tolerance, and may be susceptible to packet loss, retransmits, etc.
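
If you do have to live on that one working port for a while, it's worth keeping an eye on the error counters from the TrueNAS side, something along these lines (interface name assumed):

netstat -i
# check the Ierrs/Oerrs columns for igb0, or watch them live:
netstat -w 1 -I igb0
# errors that keep climbing on a link that negotiated 1000baseT are the
# classic signature of marginal cabling.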

If you can, push for Cat6A- or Cat7-rated plant capable of handling 10GbE. This generally requires 23AWG copper, which also works out well for PoE applications such as modern access points and other powered devices.

Happy you found your issue and I appreciate that you reported back. Hopefully you can get this replaced and be ready for the next decade or two.
 

JBK

Dabbler
Joined
Oct 30, 2021
Messages
46
Yes, I already have approval to replace the panel(s) completely. It will only grow. I inherited this mess and am learning something new every day. Thank you for everything.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Yes, I already have approval to replace the panel(s) completely. It will only grow. I inherited this mess and am learning something new every day. Thank you for everything.

I actually hang around here because it is my goal to teach someone something new every day, and this kind of problem is ripe for all sorts of possible issues and learning. I hope I managed to pass on some useful knowledge. Good luck to you!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,599
@JBK - One thing I found useful at a prior job (at the old StorageTek headquarters) is a good, up-to-date data center map: both the rack locations and what is in each rack. Sun Microsystems even had floor-loading layouts for one building when they had to move the StorageTek servers into a new building. (Those huge tape libraries can weigh many tons.)

I'd assume you would add the new cables & patch panel(s) before de-commissioning the old ones. This can lead to leftover cables (if the cable runner does not remove them), so having some documentation on those cables, both old and new, can be useful.
 